From YouTube: Kubernetes by Keytar - Jan Kleinert, Red Hat
In this talk, we're going to cover four different sections. We'll start by talking about what Kubernetes is in the first place; then we'll look at the Kubernetes API and resource types. Third, we'll have a demo where I use my keytar and the Web MIDI API to interact with the Kubernetes API to deploy and manage some resources on a Kubernetes cluster. And then finally, I'll share some resources where you can learn more. So I'll get out of the way now and we'll go on with the presentation.
So what is Kubernetes? Kubernetes is an open-source platform for managing containerized workloads at scale. As a container orchestration system, it can help you automate application deployment, scaling, and management. In other words, you can cluster together groups of machines or hosts running containers, and Kubernetes will help you easily and efficiently manage those clusters. A Kubernetes cluster consists of different Kubernetes objects: things like pods, services, and deployments, which we'll learn about later. These Kubernetes objects are persistent entities that represent the state of your cluster, and you can manage them with the Kubernetes API.
Here's a list of some of the most common objects that Kubernetes implements: pods, deployments, namespaces, services, and so on, and you work with each of these using the Kubernetes API. But you don't have to use just the raw API. You can also use command-line tools like kubectl (or "kube cuddle," depending on how you like to pronounce it), which wrap the Kubernetes API. Kubernetes distributions like OpenShift also have a web interface that you can use to simplify the management and deployment of applications on your cluster.
There are also REST clients for many languages that you can use, and our demo is going to use a Node.js Kubernetes REST client. Kubernetes API object primitives include the fields that you see here: kind, telling you what kind of resource this is; that's the object type, Pod or Deployment, for example. Then an API version; metadata, which can include fields like the name of the object; a spec, which is where you give your desired state; and then a status, which is what the Kubernetes cluster reports back to you, letting you know how close you've gotten to reaching that desired state in your spec. Many of the objects will have more fields than this, but in almost every case an API object will at least have these five fields.
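As a sketch of what those common fields look like in practice (not code from the talk), here is a minimal manifest written as the kind of plain JavaScript object the Node.js client used in the demo would accept; the name is a placeholder:

```javascript
// Minimal sketch of the common Kubernetes API object fields.
// The first four are what you author; `status` is filled in by the cluster,
// which is why it doesn't appear in the object you write.
const manifest = {
  kind: 'Pod',           // the object type: Pod, Deployment, Service, ...
  apiVersion: 'v1',      // which version of the API this object uses
  metadata: {
    name: 'example-pod', // metadata can also carry labels, annotations, etc.
  },
  spec: {
    // your desired state goes here (containers, ports, replicas, ...)
  },
  // status: reported back by the cluster, not set by you
};

console.log(Object.keys(manifest)); // the four fields you author
```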
Let's talk now about some of these basic resource types. We'll start with a pod. A pod is a group of one or more co-located containers. In many cases a pod is just running one container, but it is possible for a pod to have more than one container running, for example in the case of a sidecar. Pods are also the minimum unit of scale in Kubernetes: you can scale up and down the number of pods that you have running. Here's an example of a pod spec on the right.
This example is using YAML, but it's interesting to note that you can also use JSON, and you can use the two interchangeably when you're interacting with Kubernetes. So in this example, you can see that the kind is Pod and our API version is v1. In our metadata we have some information, including the name of this pod, which will be hello-k8s. And then, importantly, let's notice that we have some labels. We have a label, which is just a key-value pair, where we have a key of run and a value of hello-k8s.
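Reconstructed from the description above as a JavaScript object (the container name and image below are stand-ins, since those details aren't spelled out here), the pod manifest looks roughly like this:

```javascript
// A pod manifest matching the example described above: kind Pod,
// apiVersion v1, name hello-k8s, and a run=hello-k8s label.
const podManifest = {
  kind: 'Pod',
  apiVersion: 'v1',
  metadata: {
    name: 'hello-k8s',
    labels: {
      run: 'hello-k8s', // key "run", value "hello-k8s"; matched later by the service selector
    },
  },
  spec: {
    containers: [
      {
        name: 'hello-k8s',
        image: 'quay.io/example/hello-k8s', // placeholder image, not from the talk
      },
    ],
  },
};
```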
So let's take a look at a demo now. Before we jump into the demo, let me explain a little bit about how this demo is going to work. I have my keytar here; it's connected to my laptop with this USB MIDI cable. So when I play notes on the keytar, it's sending those MIDI messages to my computer, and therefore to my browser, which is listening for certain notes to be played. Depending on what note is played, it's going to trigger a request to the Kubernetes API to do something on my cluster.
We should probably take a minute to talk about MIDI and the Web MIDI API. MIDI stands for Musical Instrument Digital Interface, and it's a technical standard that's been around since the 1980s. It's designed for communication between digital musical instruments like my keytar, audio devices, computers, and so on.
Communication in MIDI happens through MIDI messages. In our demo app, we're listening for a particular type of message called a channel voice message, and it consists of three numeric values. The first is the type of event it is. The second piece of information is the note number; a note number can range from 0 to 127, and to give you a frame of reference, middle C is 60 on that scale. The third value is velocity, which corresponds to how hard I'm pressing the key on the keytar: very, very softly would be towards the 0 end of the range, and extremely hard, extremely loud notes would be higher up in the range. So every time I press a key on the keytar, it's sending a channel voice message that includes all of these pieces of information.
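As a rough sketch (not the talk's actual code), decoding one of these three-byte channel voice messages in JavaScript might look like this:

```javascript
// Decode a MIDI channel voice message: [status, note, velocity].
// The high nibble of the status byte is the event type (0x9 = note on,
// 0x8 = note off); the low nibble is the channel. Note number and
// velocity each range from 0 to 127.
function decodeChannelVoiceMessage(data) {
  const [status, note, velocity] = data;
  const type = status >> 4;
  return {
    // by convention, a note-on with velocity 0 also counts as a note off
    type: type === 0x9 && velocity > 0 ? 'noteon' : 'noteoff',
    channel: status & 0x0f,
    note,     // 60 is middle C
    velocity, // 0 = softest, 127 = hardest
  };
}

// Middle C played fairly hard on channel 0:
decodeChannelVoiceMessage([0x90, 60, 100]);
// → { type: 'noteon', channel: 0, note: 60, velocity: 100 }
```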
Then, if you do have access, you can request that access and listen for MIDI success and MIDI failure messages. The web app that we're using for our demo is using the Kubernetes API, the Web MIDI API, and, for the front-end visualizations, React and SVG. So in this first case, I'm going to play a certain series of notes, which will then trigger a pod to be created. It's a little overkill, but it's really fun. So let's try it out. It's over here in VS Code: you can see I've got a pod spec, which in this case happens to be in JSON. As I mentioned before, these can be used interchangeably, JSON and YAML. This, I believe, is exactly the same pod spec as we showed in the presentation, just in a different format. So we're going to play a certain riff on the keytar to deploy this pod on our cluster. Let's take a look.
Alright, so with that riff we have launched a pod on our cluster. You can see that as the pod came up, it turned from yellow to green. What was happening there is that yellow was the pending state, as the container was being brought up, and then once it was ready, we turn it green so that you can visualize that the pod is ready and running. Okay, so we have our pod up and running. That's great, but that's not very exciting.
A
Next,
we're
gonna
take
a
look
at
services
which
give
us
a
way
to
more
easily
interact
with
that
pod
from
within
the
cluster.
All
right.
Let's
move
on
to
service,
so
a
service
you
can
think
of
it
like
a
load
balancer.
It
acts
as
a
single
endpoint
for
a
collection
of
replicated
pods.
In
our
previous
example,
we
just
had
a
single
pod.
Imagine
what
would
happen
if
we
tried
to
access
that
pod,
but
it
had
crashed
and
we
brought
up
a
new
one
which
had
a
different
IP
address.
That would be a lot to manage. Or imagine if we had multiple pods running and we wanted to be able to balance load across those. That's where a service can be helpful. Here's an example of what the spec for a service looks like. Again, we have our kind, Service, and our metadata: we have a name for our service, just like we did for the pod, and we are also including the same label, run: hello-k8s. And here the spec looks a little bit different. So in the spec we have the ports that we're using for this service, and then we have a selector. The selector is very important: the selector is a label that the service is going to look for in the pods that are running, and that's how we associate certain pods with our service. Our service will act on any pods that have this run: hello-k8s label.
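Assembled from the fields described above (the port numbers here are placeholders, since the exact values aren't given), the service manifest as a JavaScript object would look roughly like this:

```javascript
// A service manifest matching the description above. The selector is the
// key part: the service routes traffic to any pod carrying run=hello-k8s.
const serviceManifest = {
  kind: 'Service',
  apiVersion: 'v1',
  metadata: {
    name: 'hello-k8s',
    labels: { run: 'hello-k8s' },
  },
  spec: {
    ports: [
      { port: 8080, targetPort: 8080 }, // placeholder port numbers
    ],
    selector: {
      run: 'hello-k8s', // must match the label on the pods it should front
    },
  },
};

// The selector matches the label from the earlier pod example:
const podLabels = { run: 'hello-k8s' };
console.log(serviceManifest.spec.selector.run === podLabels.run); // true
```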
Let's take a look at a demo where we add a service to our cluster. So, looking at services now, we are back in VS Code.
And there we go: our service is now created. You can see this line connecting the pod and the service; that's to represent the connection between the two, which is established by that selector, the label run: hello-k8s. So if we had more than one pod running with that same label, you would see that the service could direct traffic to either of those pods, and you could have more than two.
A
I
could
contact
that
service
and
it
will
assign
my
request
to
any
one
of
the
pods
that
are
associated
with
the
service
in
this
case,
there's
just
one
pod,
so
there's
only
one
place
for
it
to
go,
but
if
we
did
have
multiple
pods,
it
would
handle
that
routing
of
requests
before
us
now,
typically
within
a
kubernetes
cluster
you're,
not
just
manually
spending
up
pods
part
of
the
reason
for
that
is.
If
this
pod
were
to
crash,
that's
it
it
wouldn't
come
back.
We
would
have
nothing
to
access
with
our
service.
One of the benefits of Kubernetes is its self-healing capabilities, and to take advantage of those, you need some of these different object types, like deployments, which we'll look at next. So now you've seen what we can do with the pod and the service, but it's not really standard practice to just deploy pods on their own. One of the wonderful things about Kubernetes is that it has these kinds of auto-recovery, auto-healing capabilities, but you don't get that when you deploy a pod on its own.
You start getting some of those benefits when you use things like a deployment. A deployment is going to help you specify container runtime requirements in terms of pods. So, for example, with the deployment we have here, we're specifying a number of replicas. We say we want one replica; that means one instance of this container running in a pod. We're giving it some labels again, run: hello-k8s, just like we did with the pod and the service, and then within this deployment we have a template, which we didn't see before. This template includes some metadata, like our labels, and also has a container spec embedded within it. So in this case the deployment is going to run one replica of our container, running the image that we've specified here.
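The deployment just described can be sketched as a JavaScript object like the ones before (again, the container image is a placeholder; the real image isn't named in the text):

```javascript
// A deployment manifest like the one described: one replica of a container,
// the same run=hello-k8s labels, and a pod template embedded in the spec.
const deploymentManifest = {
  kind: 'Deployment',
  apiVersion: 'apps/v1',
  metadata: {
    name: 'hello-k8s',
    labels: { run: 'hello-k8s' },
  },
  spec: {
    replicas: 1, // one instance of the pod; scale by changing this number
    selector: { matchLabels: { run: 'hello-k8s' } },
    template: {
      // the embedded pod template: metadata (labels) plus a container spec
      metadata: { labels: { run: 'hello-k8s' } },
      spec: {
        containers: [
          { name: 'hello-k8s', image: 'quay.io/example/hello-k8s' }, // placeholder image
        ],
      },
    },
  },
};
```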
Let's take a look at how that works. But first, before we move on, let me clean up after myself by getting rid of this pod. We're going to keep the service there, but we'll get rid of the pod.
All right, let's move on to deployments. So here is our deployment, like we saw in the slides: we are asking for one replica, and here's the container that we want to run. So let's give it a shot. Okay, so to get our deployment up and running on the cluster, we're going to play a part of "The Final Countdown." This is honestly the hardest part of the demo, playing these riffs correctly, so I hope it goes well.
So when our deployment was created, you can see it's visualized here, and then that pod was also created with the deployment. You can see that the pod's name is a lot different now, much longer, and that pod name was auto-generated by the deployment. You'll notice that it automatically got that line connecting it to the service, and the reason this works is because we had all the proper labels and selectors set up in our deployment and service.
A
Now
this
isn't
very
exciting
right.
Here's
our
deployment
with
just
a
single
pod
with
our
service.
What
would
this
look
like
if,
instead
of
having
one
replica,
we
wanted
to
have
three?
Let's
try
that
now
and
this
time
instead
of
doing
that
with
the
keytar,
let
me
show
you
what
it
looks
like
if
we
use
the
cube,
CTL
command
line
tool
just
to
give
you
another
view
into
how
people
can
interact
with
kubernetes
clusters.
Okay, so we're going to go side by side, with the demo app on one side and our terminal on the other, and what we're going to do is use the kubectl command-line tool to scale up our deployment from one replica to three replicas. The way we would do that is kubectl scale: we tell it the name of the deployment we want to scale, so deployment/hello-k8s, and then we set --replicas=3.
A
So
as
we
do
this,
what
you
should
see
is
that
almost
immediately
these
two
new
pods
are
created
they're
in
that
yellow,
pending
state
until
that
container
is
up
and
running.
But
now
we
have
three
pods
running
in
our
deployment.
That
deployment
is
managing
these
three
pods
and
our
service
is
also
associated
with
all
three
of
those
pods.
So
if
I
were
to
make
a
request
to
this
service
within
our
cluster,
it
would
get
routed
to
one
of
those
pods,
but
it
doesn't
matter
to
me
as
a
user
which
one
it
is.
A
It
also
doesn't
matter
if
one
of
these
were
to
crash,
because
one
the
service
can
route
me
to
another
pod
and
to
kubernetes
would
work
to
bring
up
another
pod,
so
that
would
always
have
three
running
so
for
fun.
Let's
take
a
look
at
what
that
self-healing,
auto
recovery
process
looks
like
you
heard
before,
when
we
played
you
know
the
intro
to
Beethoven's
fifth
to
kill
the
pod
before.
Let's
do
the
same
thing
now
and
remember
it's
going
to
mark
that
pod
for
deletion.
And there you go: you can see this new one being created, and then, once the pod that we did delete is completely deleted, terminated, and cleaned up after, you'll see that it disappears and we're back to our desired state of three pods, just like we asked for. Now, with all of the Kubernetes API commands that I issued using the keytar and the API, we could have done all of these things via kubectl as well, or, in the case of our OpenShift cluster,
we could use the web console to view and interact with these resources. I'll give you a quick look here. This is the topology view within the developer perspective of the web console. You can see our deployment here; if I click into this, you can see we've got three pods running. I can also show the pod count, if that's helpful; right there you can see our three pods that are running. If I wanted to look at more details, for example on the deployment itself, I can always go in and look at the YAML for it here. But a lot of the time I like to work in the topology view, where I can get to most of the things that I need with just a couple of clicks, including pod logs, if you wanted to see what was running there. Alright, so that concludes our demo, and we'll pop back over into the presentation now.
So we've only looked at a few examples of Kubernetes objects: pods, services, deployments, and so on. There's so much more that you can learn, but hopefully this gives you a good starting point and helps you visualize how these objects work together on a cluster. We've also talked a little bit about the Kubernetes API, and I've shown you how you can interact with the API to take actions on a cluster, or how you can use the kubectl command-line tool to interact as well.
At this point, if you'd like to learn more, we have a lot of resources available. I've listed a few Kubernetes and OpenShift resources here. kubernetes.io is the official Kubernetes website; it has tons of great documentation, examples, and more. kubernetesbyexample.com is a great resource with some practical examples of how you can use and interact with different object types. learn.openshift.com is a self-paced, interactive learning platform where you can learn about Kubernetes and OpenShift in a hands-on way; you get quick access to an OpenShift Kubernetes cluster where you can experiment and learn.
The final two links are to some more resources on developers.redhat.com: there's a page for Kubernetes resources as well as one for OpenShift, and there's a lot of great material in there to help you learn more. Next, we have the GitHub repos. The first one here is the repo for the Kubernetes by Keytar demo app, if you want to take a look at that. Below that we have the GoDaddy Kubernetes client; this is the Node.js Kubernetes client that I used in my demo.
There is also an OpenShift REST client, if you want to interact with some of those OpenShift-specific resources that are available on top of Kubernetes: things like routes and projects, for example. Finally, if you'd like to get in touch with me, I'd love to hear from you; you can find links to my Twitter and GitHub accounts here. Thank you for watching. I hope you learned something about Kubernetes and the Web MIDI API, and had a little bit of fun. Thanks so much for your time.