From YouTube: Certs Magic with Saiyam - Episode 2
B: Hello, everyone, and welcome to cloudnative.tv. This is the Certs Magic show with Saiyam, and today is a very exciting day. Before we start, let me read out the CNCF code of conduct: this is an official live stream of CNCF and, as such, subject to the CNCF code of conduct. Please do not add anything to the chat, or questions, that would be in violation of the code of conduct. Basically, please be respectful of all your fellow participants and presenters.
B: This is streaming live on Twitch at cloudnativetv, so make sure you hit that follow button, and make sure you make the stream interactive. The Certs Magic show is obviously all about certifications. In the last stream I discussed the importance of certifications, what Kubernetes certifications exist, and where we are with respect to them: how many there are, what the course curriculum is, and how things work during the exam. All of that was discussed in the previous one, and I have posted the YouTube link for it.
B: If anybody wants to check that out, please do. In today's session I'm joined by Tim, who is an official instructor at the Linux Foundation. Welcome, Tim, to the show; please introduce yourself to the community.
A: Hi there, thanks for having me, I appreciate it. As mentioned, my name is Tim Serewicz. I'm actually the training program director for the Linux Foundation, and I'm also the author of the three instructor-led training courses we offer in Kubernetes, which line up with the CKA, the CKAD, and the CKS exams.
B: Awesome, so glad to have you, Tim. This will be a really exciting stream, because we have tons of learning. Basically, if you have ever thought of starting your CKA journey and learning about that, then this episode is probably the best one to start with, because we are actually starting from scratch: we'll be discussing what Kubernetes is, its architecture and components, what the YAML components are, what a namespace is, and all that.

B: We also have labs that we'll do live, to show you how things are actually set up, and obviously it is a one-to-one mapping with the certification exam, because you need to have an environment.

B: Obviously you can use that for practice, and there can also be exam scenarios on similar concepts. So the curriculum we are targeting today is cluster architecture, installation, and configuration, without touching much on the configuration part. This is what, from the curriculum, we are targeting to achieve in this particular episode. So yeah, without wasting any time, we'll get started; just make sure to follow cloudnative.tv.

B: One last point: there are two giveaways, 50% discount coupons, that I'll be doing at the end. Whoever is most interactive in the chat, asking lots of questions and making it interactive, will get those two coupons. So with that, Tim, maybe we can start with what Kubernetes is, a Kubernetes introduction.
A: Sure, happily, thanks very much. Starting off, before I share my screen, one of the things I always like to cover is: what is it that Kubernetes solves that previous ways of doing things didn't? This is probably the biggest takeaway, if you don't get much else: Kubernetes is not just another VM management tool. I want to say that before I share anything, because I want to make sure everybody gets it; this is the biggest hurdle.
A: It also means that our applications need to be different. Part of being an admin for a cluster, of course, is the care and feeding, the installation, the various things that go into it, but it's also feeding back to the other people in the organization, the developers, the other folks you might be interacting with, so they understand too.
A: This is not just another VM management tool; it's distinct, which I think is why talking about architecture matters. Perhaps it's just me, but I think this might be part of the heaviest lift, the most difficult stuff to get, but also the most important. So with that, now that I've hopefully impressed it upon you, let me go ahead and share my screen: share screen button, application window.
A
Oh,
it
shows
up
both
screens.
Okay.
Well,
let
me
try
that
a
different
way.
So
let
me
share
this
pdf
with
you
guys
for
now
and
we'll
share
the
other
one.
So
hopefully
you
are
seeing
it
am.
I
am
indeed
you
are
good.
Okay
good
deal,
so
this
is
a
page
from
our
course.
The
kubernetes
admin
course
lfs
458,
and
it's
it's
basically
the
first
chapter
and
it's
one
of
the
things
that
we
get
into
and
the
kubernetes
is
orchestration
software.
A: So, when push comes to shove, why do we care about it? Well, it orchestrates. Think of an orchestra: everybody's playing the same music at the same time. Kubernetes is an orchestration tool for containers. What we're looking at here with this graphic is, on the left, our control plane; on the right, a worker.
A
We
have
a
worker
and
those
are
some
of
the
the
terms
that
we
use
we're
moving
towards
inclusive
naming
so
be
aware
that
in
some
of
the
commands
you
might
see
it
called
other
stuff.
So
some
of
the
commands
inside
of
kubernetes
still
use
the
previous
name,
but
we're
moving
to
control
plane,
which
I'm
going
to
use
shorthand
of
cp,
just
kind
of
easier
to
say
and
then
a
series
of
workers
who
might
also
be
called
minions
in
some
documentation
you
run
across.
A
So
the
the
nice
thing
is
that
kubernetes
itself
follows
the
same
paradigm.
It's
a
decoupled
transient,
microservice
based
tool
and
that's
what
we
want
to
deploy,
not
we're
moving
from
vms
to
containers,
and
it's
not
just
well
I'll
containerize,
my
vm.
We
also
want
it
to
be
decoupled,
meaning
that
it's
not
reliant
on
somebody
else.
It's
transient
in
that
the
various
components
will
be
killed
on
a
regular
basis.
This
is
usually
the
biggest
stumbling
point
when
you're
like.
A
Yes,
I'm
going
to
kill
this
container
today
three
times
and
I'm
going
to
move
it
to
a
different
node.
If
you
were
to
go
to
a
legacy,
dba
and
say,
I'm
going
to
terminate
your
database
three
times
today,
they
probably
would
have
an
issue
with
it.
You'd
have
you'd
have
some
some
long
conversations
about
that,
but
this
is
what
I'm
trying
to
get
to
is
that
the
whole
nature
of
this
setup
is
about
going
away
from
a
vm
management
into
a
different
architecture.
A
So
traditionally
we
had
with
legacy
environments,
we
had
legacy
apps
that
were
monolithic
and
then
finely
tuned
for
the
equipment
we
had.
So
whatever
you
know
you
had
a
an
8
processor
box
and
32
gig
of
memory.
Well,
you
know
chances
are
the
load
would
eventually
get
to
the
point
that
you
had
to
do
some
tuning
and
tweaking
and
optimization.
A
The
people
at
google
started
a
project
called
borg
and
kubernetes
has
actually
been
around
for
well
almost
20
years
now,
but
the
first
15
of
it
were
as
this
in
internal
somewhat
secret
project
called
borg.
Google
used
it
to
run
their
business
around
the
world
instead
of
going
towards
mainframes,
which
a
lot
of
other
big
companies
have
done,
they
were
say:
well,
we
can't
we'd
have
to
keep
buying
bigger
and
more
expensive
boxes,
so
they
went
the
other
direction.
A
Now
we're
going
to
go
with
commodity
systems,
which
is,
I
think,
a
polite
way
of
saying
modestly,
priced
or
low
end,
where
you're
not
investing
in
bigger
and
bigger
servers,
with
fancier
center
planes
and
and
more
needs
for
for
high
speed
buses
and
complexity
and
ongoing
cost.
Now
we're
talking
customer
replaceable
units
that
have
rack
units
that
I
can
swap
out.
A
So
what
we
want
is
our
application,
whatever
that
application
is
to
run
across
lots
and
lots
of
systems
that
each
one
of
them
doesn't
necessarily
have
to
be
important
and
that's
what
borg
was
really
about
doing
so
if
you've
ever
used
a
google
product,
gmail
and
g
maps,
maybe
anywhere
in
the
world,
you
were
probably
leveraging
borg
at
some
point
so
when
when
they
gave
it
away,
they
gave
away,
of
course,
not
everything,
just
the
the
core
of
it
and
that's
what
became
kubernetes,
which
is
pilot
in
greek.
A
Technically,
it's
the
the
oarsmen,
the
person
holding
the
wooden
or
in
the
in
the
water,
but
we
call
it
pilot
or
helmsman.
The
person
steering
the
boat,
it's
orchestration
software
and
the
purpose
behind
this
orchestration
software
is
to
have
an
application
running
across
lots
and
lots
of
nodes.
So
we
don't
want
big
nodes.
We
want
lots
and
lots
of
commodity
nodes.
We
aggregate,
then
all
the
processors
and
all
the
memory
together
to
say.
Well,
my
computing
environment
is
capable
of
having
this
app
with
us.
You
know
512
different
processors
working
on
it.
A
How
we
get
there
is
by
running
our
containers,
which
are
micro
services,
so
we're
not
looking
for
large
monolithic
apps.
We
want
to
divide
that
up
into
various
tasks
and
then
run
that
to
different
places,
so
we
would
have
instead
of
a
monolithic
app
that
might
do
everything.
Okay,
this
is
the
front
end
and
it
accepts
a
an
api
call,
and
then
I
have
a
separate
authentication,
microservice
someplace
else,
and
then
I
have
a
database
and
I
have
something
else,
but
you
divide
it
up
by
the
tasks.
A
Then
there
really
isn't
a
definition
of
how
small
a
micro
server
should
be,
but
we
want
to
make
sure
it's
scalable
and
durable
and
that's
where
the
decoupling
comes
in,
so
we
want
a
transient,
meaning
I'm
willing
to
go
away
and
be
regenerated
or
whoever.
I
was
speaking
to
I'll
wait
for
them
to
come
back.
We
want
to
write
that
into
our
code
and
then
the
orchestration
software
that
we're
running,
which
is
kubernetes,
handles
that
it
says.
A
Well,
I
will
take
care
of
it
if
you
are,
if
you
go
away
I'll,
give
you
a
new
one,
and
so
that's
where
we're
going.
That's
the
high-level
view
of
why
we
care
about
kubernetes
what
it
does
for
us
and
why
it's
not
just
another
vm
management
tool
if
we
kind
of
get
that
understanding
that
we're
going
away
from
monolithic
apps
into
decoupled
transient
microservices,
and
why
then,
as
we
talk
about
the
components
that
do
it,
hopefully
it
will
make
more
sense.
A
So,
on
the
left
hand
side
here,
we
see
our
control
plane
and
the
most
of
the
stuff
that
we
see
in
this
graphic
are
actually
containers
themselves.
There's
one
exception
to
that
and
that's
this
container
called
cubelet
which
we'll
talk
about
in
a
sec,
but
let's
follow
a
call
from
the
outside
world
through
the
process
of
perhaps
making
a
pod
and
we'll
define
and
and
talk
about
the
components
along
the
way.
A
So,
on
the
left
hand
side,
we
see
that
there
is
a
cube,
ctl
command
or
cube
cuddle,
there's
a
ongoing
email
about
what's
the
proper
way
of
calling
that
tool.
So,
let's,
let's
call
it
cube
cuddle
for
now
what
cube
cuddle
actually
does
for?
You
is
among
you
know
many
things,
not
just
one
thing,
but
the
kind
of
the
main
component
is
a
curl.
It's
a
curl
request
with
some
sort
of
http
verb,
get
post,
delete
and
so
forth.
A
As
a
result,
it's
an
api
call
so
we're
making
a
curl
request,
which
you
can
do
either
you
through
cube
cuddle
or
you
could
generate
your
own
curl
command.
If
you
know
what
the
certs
are
and
you
send
it
to
the
cube
api
server
as
kind
of
a
self-documenting
name
there,
the
cube
api
server
handles
your
api
calls
you'll
notice
when
looking
at
the
various
arrows.
A
All
of
the
api
calls
everybody
talks
to
the
api
server.
The
api
server
handles
apis
in
keeping
with
a
decoupled
transient,
microservice
concept.
All
it
does
is
handle
the
api.
So
it's
it's
not
actually
handling
and
managing
what
the
api
call
wants
to
do.
It's
really
just
arranging
and
does
three
things.
First,
are
it's
authentication?
A
Are
you
really
who
you
say?
You
are
that's
done
by
default
through
a
token
of
x509
token,
but
you
could
also
point
yourself
in
a
single
sign-in.
You
know,
and
through
a
web
hook
request.
The
second
thing
it
does
is
authorization
whatever
the
curl
request
was.
Are
you
authorized
to
do
that?
We
do
that
using
our
back
role
based
access
control.
So,
if
you
are,
you
know,
if
you
are
really
who
you
say,
you
are
and
you're
authorized
to
create
or
delete
or
look
at
whatever.
A: In this case, let us say that I asked for the creation of something called a deployment, which is a default operator that we would use. So I send a curl request to the API server saying: please create a deployment for me. The API server then, assuming that my token is proper and that RBAC says you are allowed to do that in this namespace, will communicate with all of these other pods. Each one of them has its own particular purpose.
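The request being described corresponds to a Deployment manifest roughly like the one below (the name `test` and the `nginx` image are placeholders, not from the stream; `kubectl create deployment` generates something close to this and POSTs it to the kube-apiserver):

```yaml
# Minimal Deployment manifest, as the API server would receive it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1          # defaults to 1 if omitted, as discussed later
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginx
```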
A
It's
microservice,
so
at
cd
you'll
notice,
the
only
agent
talking
to
etcd
is
the
api
server.
The
fcd
keeps
track
of
the
persistent
state
of
your
cluster.
It's
not
a
database
for
end
user
usage.
This
is
just
for
what's
going
on
with
your
cluster
and
that
information
can
be
divided
into
two
parts:
the
spec,
which
is
what
it
should
be
and
the
status
which
is
what
it
is
now,
I'm
oversimplifying
it
to
some
extent,
but
that's
what
etcd
is
keeping
track
of.
Does
it
in
an
adjacent
format,
and
so
what
should
be?
A
And
what's
the
current
situation
and
it
persists
there?
It's
also
kept
in
memory
of
the
cube
api
server
container,
that's
and
it
checks
the
cache
and
if
it
needs
to
it'll,
write
there
or
reference
the
information.
So
only
the
api
server
talks
to
ncd,
and
so
when
I
make
a
request,
I
would
like
to
create
a
new
deployment.
It
will
communicate
if
it
doesn't
already
have
it
in
its
own
memory.
A
Does
this
token
match
is
the
r
back
setting
appropriate
and
then
that
the
spec
needs
to
change
there
needs
to
be
a
deployment
now
it
that's
kind
of
where
its
job
for
now
ends
the
cube
controller
manager,
that's
at
the
the
top
part
of
the
control
plane.
That's
that's
our
brain
that
that
container
has
all
of
our
operators
in
it
and
that's
a
key
phrase
for
understanding.
Kubernetes
is
operator,
sometimes
you'll
see
them
called
controllers.
Sometimes
people
will
reference
them
as
a
watch
loop,
the
more
modern
term
for
it
is
operator.
A
It
operates
on
something
we.
The
entire
nature
of
this
orchestration
system
is
decoupled
and
transient.
It's
an
understanding
that,
whatever
I
was
talking
with,
is
going
to
go
away.
How
we
do
that
is
through
a
series
of
operators
that
are
constantly
asking
for
the
spec
what
should
be
and
then
the
current
status
if
they
match
it
just
asks
again
over
and
over
and
over
again
all
the
time.
That's
all
it
does.
What's
the
spec,
what's
the
status,
what
over
and
over,
if
the
spec
and
the
status
don't
match,
that's
where
the
operation
part
comes
in.
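You can see the two halves an operator compares by reading a live object back from the API server; the sketch below shows the rough shape of `kubectl get deployment -o yaml` output, heavily abbreviated and with illustrative field values:

```yaml
# Abbreviated object as stored by the cluster: operators reconcile the
# declared spec against the observed status.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1            # what should be
status:
  readyReplicas: 1       # what is, as last observed
  availableReplicas: 1
```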
A
So
in
this
case,
when
I
made
a
request
for
an
operator
sorry
for
a
deployment,
then
a
moment
later,
the
cube
controller
manager
would
make
a
request.
Saying:
has
the
spec
changed?
Yes,
it
has.
There
is
supposed
to
be
a
new
deployment
called.
Let's
call
it
test,
oh
okay,
so
it
gets
the
the
spec
has
now
changed.
There
should
be
a
deployment
called
spec
or
a
test
moment
later
says:
what's
the
status?
Is
there
a
deployment
called
test?
No
there's
not.
A
I
have
something
to
operate
on
so
the
deployment
operator
running
inside
of
the
cube
controller
manager
says,
there's
a
difference
and
I
will
operate
upon
it,
create
this
deployment.
Now
then,
it
goes
back
and
forth
a
lot
of
back
and
forth
between
the
brain,
which
is
your
cube,
controller
manager
and
your
cube
api
server.
A
The
deployment
operator
actually
manages
a
different
operator
called
a
replica
set.
So
this
is
another
part
of
understanding
the
architecture
of
kubernetes
that
you
might
have
these
watch
loops
watching
other
watch
loops,
which
watch
resources
for
you.
So
instead
of
having
one
operator
a
watch
loop
that
does
everything
we
have
a
decoupled
operator.
So
you
do
your
task
and
I'll
do
mine
and
we'll
be
focused,
we've
been
updated
independently.
A
We
can
do
what
we
can
optimal
in
our
job
and
be
developed
separately,
which
is
the
same
concept
of
the
entire
cluster
and
what
we
want
our
applications
to
do
as
well,
so
the
deployment
operator
says
well.
Do
I
have
a
replica
set
same
thing,
goes
to
the
api
server?
What's
my
spec
there
do.
I
have
this
replica
set.
Well,
you
just
created
one,
and
the
status
is
no,
it
doesn't
exist.
So
a
different
operator
is
formed.
Called
a
replica
set.
The
job
of
a
replica
set
is
keeping
track
of
replicas.
Well.
A
That
operator
makes
a
request
how
many
replica
pods
do.
I
have
that
are
using
this
pod
spec,
so
they
are
replicas,
meaning
they
use
the
same
specification,
they're
running
the
same
image,
they're
using
the
same
components
and
how
many
of
them
do
I
have
so,
if
I
haven't
told
it
otherwise,
the
replica
count
would
be
one
so
the
replica
set
says
my
spec
says
I
should
have
a
pod
one
pod.
How
many
do
I
have
it?
A
Does
this
request
by
a
label,
so
the
architecture
is
based
off
of
some
sort
of
operator
that
uses
a
selector
that
ties
to
labels.
That's
all
that
ties
everything
together,
yeah
there,
of
course
there's
names
and
other
components,
but
when
it
really
comes
down
to
it,
each
of
these
operators
doesn't
really
know
what
components
it
should
be
keeping
track
of,
not
from
one
call
to
the
next.
There
isn't
a
session
concept,
it's
what's
the
spec
and
what's
the
status,
and
how
does
it
know
which
I'm
talking
about
that?
A
I
have
a
selector
that
matches
a
label.
So
in
this
case
the
cube
controller
manager,
replica
set
operator
says
how
many
match
this
particular
label.
You
know
test
app
is
test.
None.
I
haven't.
You
know
you
have
zero.
Okay,
I
will
operate
on
that
information
and
so
back
and
forth.
All
of
this
is
happening
between
the
cube
controller
manager
and
your
api
server.
All
of
that
logic,
all
of
that
comparison
is
happening
just
there.
We
haven't
even
gone
to
our
workers,
yet
I
need
to
create
a
pod.
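The selector-to-label tie being described looks like this in a ReplicaSet manifest (names are illustrative placeholders):

```yaml
# The ReplicaSet operator counts pods whose labels match its selector;
# that matching is the only thing tying the objects together.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-replicaset
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test          # "how many pods match this label?"
  template:
    metadata:
      labels:
        app: test        # pods created from this template carry the label
    spec:
      containers:
      - name: nginx
        image: nginx
```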
A
So
a
pod
spec
is
sent
to
the
cube
api
server
saying
there
should
be
a
pod
running
this
image
with
what
of
with
other
default
parameters
that
the
operator
has
sent.
The
pod
spec
goes
to
the
api
server
and,
of
course,
what
does
it
do?
Authentication,
authorization
and
admission
control.
Now
I
have
a
pod
spec.
I
need
to
send
that
somewhere
to
to
run
and
then
who
do?
I
ask
cube
scheduler
cube
scheduler,
that's
the
next
pod
running
on
that
cp
node.
It's
job
again
singular
job,
it's
a
microservice.
A
What
does
it
do?
It's
schedule
stuff?
It
is
getting
information
about
the
available
nodes
and
their
condition.
You
know
what
size
are
they
maybe
and
schedules
very
flexible.
You
can
have
multiple
schedulers,
so
there's
a
wide
range
of
flexibility
here.
Is
there
a
taint
or
toleration?
I
should
be
aware
of
it's
looking
at
all
of
this
information.
But
what
really
comes
back
from
cube
scheduler
to
the
api
server
is
just
use.
A
This
node
use
node
2.,
so
it
does
all
of
the
logic
as
far
as
what's
optimal
according
to
the
algorithm
of
the
scheduler
predicate,
there's
one
part
of
it.
Where
it
takes
away
notes
from
the
possible
list
and
then
priorities
of
the
remaining
nodes
that
are
still
in
my
list,
which
one
is
best,
the
scheduler
returns
to
the
api
server
and
says
I
choose
worker
number
two,
whatever
the
case
may
be
at
that
point,
the
cube
api
server
will
again
doesn't
really
do
anything
but
handle
those
api
calls.
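The taints and tolerations mentioned above are one of the inputs the scheduler's predicate phase weighs. A sketch, with a made-up taint key and pod name: if a node has been tainted (for example with `kubectl taint node worker2 gpu=true:NoSchedule`), only pods carrying a matching toleration survive the filtering for that node:

```yaml
# A pod tolerating the hypothetical gpu=true:NoSchedule taint.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:
  - key: gpu
    operator: Equal
    value: "true"
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
```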
A
So
it
will
persist
some
of
that
information
to
ncd,
saying
it's
supposed
to
be
running
on
worker
2.
and
then
to
cubelet.
So
let's
go
with
that.
Middleworker
will
be
worker
2..
It
sends
it
to
cubelet
cubelet,
which
runs
on
every
node
is
what
actually
starts
your
containers
doesn't.
Do
it
directly.
Cubelet
is
a
systemd
service.
So
it's
the
one
thing
here:
that's
not
a
pod!
It's
what
starts
all
the
pots.
So
cubelet
gets
the
pod
spec.
A
It's
cubelet's
job
to
talk
to
your
container
engine,
whatever
that
container
engine
may
be,
and
that's
just
it
we
don't.
The
cluster
doesn't
really
care
what
the
actual
engine
is.
So
it
could
be
docker.
It
could
be
cryo
container
d,
frock
d,
yaki
lots
of
options
out
there
and
we
don't
orchestrate
and
insist
on
any
one
of
them
as
long
as
kublet
on
that
particular
node
knows
how
to
talk
to
the
engine
and
tell
it
what
to
do.
Then,
then
it's
happy
we're
all
happy.
A
That's
handling
the
network
side
of
things.
So
if
there
is
anything
having
to
do
with
the
network
being
configured,
that
actually
happens
on
all
nodes,
which
is
why
you
can
talk
to
any
worker,
any
note
really
and
still
get
to
the
pod,
even
if
it's
not
where
the
pod
lives.
So
we
have
that
flexibility.
Everybody
gets
these
rules.
We
have
a
network
plug-in
running
that
helps
that
communication
as
well,
so
one
cubelet
gets
the
pod
spec.
All
of
the
proxies
would
get
any
necessary
information
and
and
arrange
your
ip
tables
for
that
layer
of
communication.
A
So
going
back
to
cubelin
accepts
the
pod
spec
and
it
says
well,
what
do
I
need
if
there's
a
volume
that
is
listed,
cubelet
is
who
talks
to
the
kernel
to
get
that
volume
mounted,
and
this
can
be
important
when
we
start
talking
about
access
to
our
vibes.
It's
important
to
understand
the
container
does
not
do
the
mounting
it's
cubelet
that
does
it,
and
that
happens
before
the
container
is
even
started.
So
it
mounts
it
talking
to
the
local
kernel
and
then
makes
a
symbolic
link
available
to
wherever
the
container
will
end
up
being.
A
If
you
have
these
things
called
secrets
or
config
maps,
this
is
another
part
of
the
decoupling
of
our
environment.
We
want
to
have
the
smallest
image
possible
with
any
kind
of
parameter
or
value
or
file
that
might
change.
We
want
that
to
be
decoupled
and
separate,
so
we
can
do
that
in
a
way
called
a
secret
which
would
be
encoded
or
encrypted,
or
neither
encoded
nor
encrypted,
but
more
flexible
would
be
a
config
map.
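As a sketch of that decoupling (the names and the value are invented for illustration), a ConfigMap and a pod consuming it:

```yaml
# Configuration lives outside the image; kubelet fetches it and injects it
# before the container starts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config   # keys appear as environment variables
```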
A
So
it's
the
kubelet's
job
to
request
all
of
this
information,
so
mount
resources
download
any
secrets
work
with
any
of
this
when
it
has
the
resources
that
were
in
the
pod
spec,
whatever
that
may
or
may
not
be
when.
A
Then
the
pod
leaves
its
pending
state
and
cubelet
tells
docker
go
ahead
and
start
these
containers.
One
of
the
things
that
happens
is
there's
actually
a
pause
container
started
first
that
holds
the
ip
address.
So
your
pod,
your
your
containers
do
not
even
know
what
their
ip
will
be.
It's
an
ephemeral
ip
and
they
don't
know
what
it
is
until
they're
started.
We
don't
have
a
inside
of
the
pod
networking,
so
some
people
who
are
used
to
docker
kind
of
assume
there
must
be
another
layer
going
on
whether
you're
using
docker
or
cryo.
A
It's
a
sign
and
you
have
one
ip
per
pod.
This
is
probably
a
good
time
to
talk.
What's
this
pod
that
you
keep
talking
about
tim
well,
what
we
actually
orchestrate
in
our
environment
are
pods
a
pod
is
one
or
more
containers
that
have
a
single
ip
address.
They
share
a
network
namespace
and
they
have
equal
potential
access
to
storage.
That's
what
we
actually
orchestrate
by
pods
via
the
pod
spec.
The
running
of
the
container
is
not
something
that
kubernetes
actually
pays
attention
to.
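That definition, containers sharing one IP, one network namespace, and potential access to the same storage, can be sketched like this (container names, images, and the command are placeholders):

```yaml
# Two containers in one pod: they share the pod's single IP (each can reach
# the other on localhost) and can mount the same volume.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
```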
A
It
just
talks
to
the
engine,
which
should
do
that
for
you,
which
could
be
docker
or
cryo
container
d
and
so
forth.
Docker
was
the
default.
If
you
use
cube
adm,
it
would
still
be
the
most
typical
and
probably
easiest
way
to
do
it,
but
be
aware
that
now
that
docker
is
kind
of
got
pulled
in
to
marantis,
really
isn't
docker
anymore,
that
the
community
is
definitely
moving
towards
other
options,
container
d
or
cryo.
You
know
red
hat
uses
cryo
already,
so
there's
a
lot
of
of
people
using
it.
A
In
that
sense,
container
d
is
pretty
straightforward.
To
use
you
can
do
other
stuff,
so
the
engine
decision
from
a
cluster
admin
perspective
might
be
something
that
you
want
to
sit
down
and
and
have
conversations
about
when
it
comes
down
to
it.
As
far
as
kubernetes
is
concerned,
it's
compliant
engine
runs
a
compliant
image.
I
don't
really
care,
nobody
would
know,
and
that's
just
it.
Nobody
would
know
what
the
engine
is.
If
you're
running
a
compliant
engine,
so
hey
this,
my
life
is
much
easier.
A
Then
I
might
want
to
have
a
feature
that
this
or
that
engine
does
for
me.
For
example,
container
d
allows
me
to
run
g
visor
very
easily.
It's
easy
to
get
it
up
and
running.
Gvisor
gives
me
some
security,
so
that
might
be
a
reason
to
go
with
container
d.
Cryo
is
something
that's
used
in
in
red
hat,
so
there's
a
large
install
base.
It's
well
known,
well
understood
in
that
net
realm,
so
you
have
choices,
but
when
it
comes
down
to
it,
a
compliant
engine
runs
a
compliant
image
and
nobody
knows
the
difference.
A
It
just
runs
so
cubelet's
responsible
on
whatever
that
worker
is
and
your
worker
by
the
way
could
even
be
a
windows
server,
because
the
overall
cluster
is
like
well,
I
talked
to
cubelet.
I
sent
the
pots
back
to
cubelet,
it's
cubelet's
job
to
talk
to
whomever
or
whatever
that
engine
may
be.
So
at
this
point,
cubelet
has
all
of
the
resources
that
it
needs
and
it
communicates
to
docker
or
cryo
or
container
d.
Whoever
it
is
says,
okay,
start
that
here's
your
ip
address,
here's
your
other
parameters,
start
that
container
for
me.
So
that's
it!
A
We
now
have
our
running
replica.
How
our
system
does
orchestration,
then,
is
the
back
to
the
control
plane.
The
cube
controller
manager
has
those
operators
they
never
stop.
Asking
they're,
always
asking.
What's
the
spec,
what's
the
status,
what's
the
spec,
what's
the
status,
so
if
your
container
were
to
fail,
if
your
node
were
to
fail
just
go
away
on
you,
it
it
just
blips
and
somebody
pulls
the
power
cord.
Well,
they
those
watch
loops
like
do.
I
have
something
that
matches
these
labels
and
the
deployment
says.
Do
I
have
a
replica
set?
A
The
republican
says
yes,
yep
still
here,
okay,
great
replicas,
that
says:
do
I
have
a
pod?
No,
you
do
not
have
a
pod
that
has
that
label.
Oh
well,
spec.
A
Start
one
and
the
process
continues
start
a
replica
of
this
pod
for
me
goes
to
this
api
server.
Api
server
asks
the
scheduler,
of
course.
If
node
number
two
is
just
gone
now,
the
schedule
says:
well,
that's
not
a
good
choice,
you're
going
to
go
to
worker
number
three
and
this
process
will
continue
times
as
many
replicas
as
you
want
as
many
different
options
as
you
want.
So
we
can
orchestrate
and
anything
can
go
away.
We
can
add
new
nodes.
We
can
grow
our
cluster
from
one
node
to
5000
nodes.
A
We
can
scale
our
pods
from
one
to
ten.
We
can
use
the
deployment
can
deploy.
Multiple
replica
sets
for
you
and
change
from
using
the
one
version
to
the
others.
You
need
rolling,
updates
and
rollbacks.
This
kind
of
decoupled
transient
architecture
that
leverages
ongoing
operators
or
watch
loops,
always
asking
always
checking
means
that
we're
expecting
something
to
change
and
we
operate
around
it.
So
that's
why
everybody
really
likes
one
of
many
reasons
to
like
or
love
kubernetes.
B: I mean, that was not a quick run-through; that was a very detailed run-through of the architecture and the components, and of what actually happens when a person writes "kubectl run" with an image and a pod name, what steps it takes to actually run that small application, or a microservice, or a simple nginx, on the Kubernetes system. So I think that was a complete end-to-end.
B
You
know
detailed
explanation
of
all
the
components
which
are
there
on
the
control
plane,
which
is
the
cp
and
also
on
the
worker
nodes,
where
how
the
cubelet
is
working,
how
it
interacts
with
the
container
runtime
interface,
a
csi
drivers.
If
storage
has
the
storage
has
to
be
there.
B
So
I
think
that's
that's
a
pretty
neat
introduction
to
the
architecture
by
far
the
best
one
I
have
ever
heard
to
be
honest
and
people
do
agree
with
me
to
in
the
chat,
so
I'm
not
lying,
so
people
are
agreeing
that
it
is
the
best
one.
So
by
now
those
who
are
watching
who
you
might
now
get
the
idea.
B
What
kubernetes
is
because
tim
has
explained
very
clearly
like
how
the
shift
has
happened
and
kubernetes
was
there
internally
for
a
lot
of
time
and
then
the
core
was,
you
know,
exposed
to
the
open
source.
Basically,
and
then
this
is
the
architecture
that
you
are
seeing
on
the
screen,
pretty
clear,
all
the
components
have
their
own.
You
know
own
respective
meaning
and
the
purpose
in
the
ecosystem
and
the
controller
manager,
the
brain,
the
api
server.
B
All
the
communication
happening,
the
atc
customer
state
and
your
kubelet
is
responsible
for
running
the
pause
interacting
with
the
cryo
and
the
cri
proxy
is
for
your.
You
know.
The
networking
ip
table,
pooling
and
scheduler
is
for
scheduling
the
nodes
right
fit
node
for
that
particular
workload.
B
So
I
think
that
that
pretty
much
is
covers
the
introduction
to
kubernetes
and
how
a
pod
runs
on
kubernetes,
because
these
are
the
basic
building
blocks
like
a
pod
deployment
replica
set,
and
these
are
the
components,
the
api,
server,
controller
manager,
etc.
Scheduler
cubelet
cube
proxy.
So
with
this
I
think
we
are.
You
know
we
are
now
in
a
good
state
to
to
start
exploring.
B
Basically,
if
people
want
to,
you
know,
set
up
something
set
up
a
kubernetes
cluster,
then
probably,
how
do
they
do
that
now?
This
is,
you
know,
like
I
said
before,
this
is
a
search
magic,
show
everything
ties
to
the
certification,
so
obviously
you
you
have
to
have
a
cluster
to
practice.
B
That
is
very
important,
so
this
will
not
only
help
you
to
stand
up
a
kubernetes
cluster,
but
also
it
can
be
helpful
during
the
exam,
because
you
might
have
a
question
that
where
you're
asked
like
you
know,
create
a
kubernetes
cluster
using
cube
adm,
then
how
would
you
do
that?
So,
let's,
let's
do
the
the
lab
for
creating
the
cluster
tim.
A
Okay
sounds
great,
and
in
our
courses
we
don't
we
don't
write
exam
specific
courses
just
just
to
kind
of
forewarn
everybody.
Instead,
we
try
to
make
you
the
best
admin
possible,
which
of
course
also
means
that
you'll
be
well
well
prepared
for
the
exam.
So
it's
not
that
we
we
don't
ignore
the
exam,
but
a
lot
of
times.
People
expect
a
brain
dump
like
well.
Just
tell
me
what's
on
the
exam
and
that's
not
what
we
do.
A
We
want
to
give
you
the
skills
to
go
into
a
production
environment
and
and
get
the
job
and
do
the
job,
which
is
what
certification
is
also
about.
So
I
always
like
to
to
preface
that
when
people
say
well,
is
this
exactly
what
I'll
see
on
the
exam?
It's
all
the
topics,
all
you're
working
with
the
tools
that
you
will
need,
but
it's
not
an
exam
specific
thing.
So,
the
way
our
our
labs
are
written,
I
write
them
to
be
as
flexible
as
possible.
A
We
use
a
two
node
cluster
and
that's
to
expose
you
to
networking
issues
and
evacuation
from
one
node
to
another.
You
could
run
kubernetes
other
ways,
it's
very
flexible.
There's
60
or
seven
conformant
software
clusters
out
there,
so
you
have
options,
but
we
try
to
expose
you
not
just
to
this
would
work
for
the
exam.
But
what
am
I
going
to
see
when
I
get
it
to
the
job?
What
is
it
that
my
cluster
is
going
to
look
like,
so
we
use
a
two
node
cluster
and
we
use
cube
adm
to
build
it.
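A kubeadm build like this is typically driven by a small config file passed to `kubeadm init` on the CP node, with `kubeadm join` run afterwards on the worker using the printed token. The sketch below uses placeholder values; the API version, Kubernetes version, endpoint alias, and pod subnet all depend on your release and choice of network plugin:

```yaml
# kubeadm-config.yaml -- used as: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.21.1          # pin a version; placeholder value
controlPlaneEndpoint: "k8scp:6443" # an /etc/hosts alias makes adding CP nodes easier later
networking:
  podSubnet: 192.168.0.0/16        # must match your network plugin's CIDR
```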
A
I've
written
the
labs
so
that
you
could
use
virtualbox
vmware
two
spare
laptops
are
sitting
around.
You
can
use
google
amazon
digitalocean
many
options
because
it's
just
two
instances:
the
only
provider
that
tends
to
have
headaches
and
and
we
we
tend
to
just
warn
people
just
so
you
know,
is
as
azure
that
have
they
have
their
own
some
networking
things
that
are
kind
of
interesting
there
and
they
tend
not
to
run,
but
it
runs
everywhere
else
with
just
two
instances.
So
in
this
case
I'm
you
would
leverage.
A
You
I'm
used
to
sharing
my
entire
screen
and
not
just
the
thank
you
for
letting
me
know.
So,
let's
share
that
window.
Real
quick.
B
And
also
very
very,
very
good
point
said
by
tim
that
even
the
search
magic
show
or
anything
that
is
there
any
training
material
that
that
cncf
has
produced
is
basically
for
making
you
enable
to
do
actual
tasks
at
your
place
at
your
workplace
and
in
the
last
episode
I
discussed
exactly
the
same
things
like
why
certifications
are
important,
because
the
learning
journey
will
prepare
you
for
your
jobs
at
your
work
and
everything.
A
Absolutely. Are you seeing the Google screen now? Okay, great. So I've just set up two nodes to be ready for the lab. One I called cp, the other one I call worker. So you know exactly what I did: I went to create an instance — and I know this is going to be really slow now that I'm trying to do it for everybody else — but the point is that you set up two instances. The big heavy lift that most people get stuck with is the networking side.
A
So at least two processors and eight gig; we're going to change that in a moment. We're still running Ubuntu 18.04, because that's what the exam uses; 20.04 is going to be coming soon, and as soon as the exam team updates, then hopefully within a week I'll get my stuff up and running and match whatever the exam environment is. So then the hard part that most people get stuck at is down here, talking about networking. We don't want anything between our two nodes, and in most environments, whether it's VirtualBox...
A
...that's not really that open. You actually have to turn it to promiscuous mode. With VMware, all of these — KVM, QEMU, whatever it is — make sure that your two nodes have nothing blocking traffic between them. Later, once you have it working, that's when you go back in and start adding firewall rules, but for now let's make it completely open. So you go to networking and you can change it. In this case I have a network called "for class", and if you dig into what it is, there's nothing blocked. Everything is open, entirely open.
A
So there's nothing between our nodes. That's usually the hard part — setting up your environment. VirtualBox people don't realize that it still doesn't allow all traffic; KVM, QEMU and your OVS switch may not allow all traffic. So make sure there is nothing between your nodes; that's the hard part about this. Then you create it, and what you end up with, in this case, is a node that will be my control plane and another node that will be my worker. Amazon has the same sort of thing.
A
So here it's called a VPC; Amazon — I'm blanking on what they call it, but it's the same concept. Make sure you go into the network tab and allow all traffic. Not just "oh, I'm sure this is all I need" — all traffic. Worry about tightening it and locking it down once you have it working. So then you end up with access to your nodes, and let me share that screen. So stop share; share screen; application window.
A
Okay, so I have an application window here. I'm just using a tool called Terminator; hopefully you're seeing two different terminals. This allows me to go back and forth. On the top I've logged into my control plane; on the bottom I've logged into my worker, and so far I haven't really done anything at this point.
A
So what I want to do at this point is to get my system installed and up to date. So I'm going to go ahead and become root, and then I'm going to update and upgrade my environment just to make sure that it's current. I'm going to focus on the control plane for the moment, so I'm going to zoom into that so you can see it as it runs and as it goes by. Ubuntu 18.04 is getting a little old.
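Those first steps on the control plane can be sketched like this (a minimal sketch; the non-interactive `-y` flag is my addition — the demo runs it interactively):

```shell
# Become root, then bring the OS current before installing anything.
sudo -i
apt-get update && apt-get upgrade -y
```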
A
So you might be asked some questions during the update: do you want to allow a restart, do you want to use the local version, and you might be asked time-and-date questions if you ever install CRI-O instead. In this case, hopefully it asks me these questions shortly, but as it's installing, where we're going with this is: we get the OS up to date, we add a repository to get to the software.
A
Then we install the software and use the kubeadm init command. In this case it didn't ask me any questions, but it might; so if it does, allow the reboot and then keep the local version. Now, in this case, if you don't have an editor you might want to install one — vim, emacs, nano, doesn't really matter — just make sure that you actually have that bit of information.
A
Now, in this case, I could install Docker — apt-get install docker.io — or, if you want, you could go and install CRI-O instead. Since CRI-O is a little bit more complicated, why don't we try to do that here so you can see it. So either you would do an apt-get install docker.io here and go, or here's ten steps for getting your CRI-O to work. These are some of the things to get CRI-O working, and containerd is...
A
...a little easier. So I actually chose the hard one, because if you can get the hard one to work, the other ones should be a little easier. So in this case, a modprobe of the overlay and br_netfilter modules, and then I want to make sure that this is also persistent, so I'm going to edit a sysctl file: /etc/sysctl.d/99-kubernetes.conf, so it runs last, and inside of this I'm going to make sure that the bridge settings are set.
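Those kernel-module and sysctl steps can be sketched as follows; the demo doesn't read out the file's exact contents, so the keys below are the standard ones from the Kubernetes install documentation:

```shell
# Load the kernel modules the container engine needs now...
modprobe overlay
modprobe br_netfilter

# ...and make the related sysctl settings persistent, in a file
# that sorts last under /etc/sysctl.d/.
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
```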
A
Of course, I want to make sure I didn't mess that up, so: sysctl --system, and you should see at the bottom there that it's applying those changes, among everything else that you may have done. Now, in this case we use the openSUSE versions of the software, so just to make life a little easier I'm going to set — so I did...
A
...an export of the operating system as xUbuntu_18.04 — and that will change depending on what version you're using — and then what version of Kubernetes, or CRI-O, that you're planning on using. CRI-O gets updated in accordance, you know, a little bit behind: when Kubernetes comes out, you get a version for that. So I'm going to use an echo command and I'm going to create an apt sources list for the openSUSE repository.
A
So this is what's going to go into your file: deb download.opensuse.org, repositories, devel, kubic, libcontainers, stable, cri-o, and then I passed it the version and OS. You see what actually got put in there: 1.20 and xUbuntu_18.04. For your versions, you can always go to download.opensuse.org and explore it, so if it changes and you can't find it, go there and you should be able to find those resources.
A
As you look — now, of course, we want to be able to actually use that software, so we load the keys for it, and this is also documented on the CRI-O page, cri-o.io. So if I'm talking too fast and you can't quite see the GUI: the main page for CRI-O, the Ubuntu install, has all of this information in it.
A
So I've added a repository this time and I added the key. Now my second repository, for the libcontainers — and, well, there's an issue with backspace in my example, so I'm going to create it again. Same thing: opensuse repositories, devel, libcontainers, stable, for whatever my OS is, and it ends up being, of course, xUbuntu_18.04, and I have a key for that repository as well. So, just to show you the history of what I've done so far, so you can kind of see it all together.
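A hedged sketch of those two repository additions; the repo paths follow the pattern documented on cri-o.io, and the version values are just the ones used in the demo:

```shell
# Variables the openSUSE repo paths are built from (demo values).
export OS=xUbuntu_18.04
export VERSION=1.20

# CRI-O repository and its signing key.
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" \
  > /etc/apt/sources.list.d/cri-o.list
curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key" | apt-key add -

# libcontainers repository and its signing key.
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" \
  > /etc/apt/sources.list.d/libcontainers.list
curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key" | apt-key add -
```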
A
Of course, there's one typo in there, but otherwise I've just updated the system and made sure that I can get to my CRI-O software as it's available. Now that I've done that, I need to let apt know that there's a new version, and so it should be pulling, and I should see that it's successfully pulling from cri-o and libcontainers.
A
That's a way of double-checking you didn't typo like I did in the previous example. Now that it appears to have worked, let's install the packages. So we're going to install cri-o and cri-o-runc. On the runc version there's a little bit of a disconnect between versions — you could use the distribution's one, but it's not always perfect, so I want the CRI-O version of that software. It should be installed here pretty quick, and we want to make sure it actually is running.
A
So I'm going to do a systemctl daemon-reload, I'm going to make sure that CRI-O is enabled, then start it and take a look at it, and hopefully, if my luck holds, when I look at the status it will say it is active and running. You can look through to see if there's anything odd; you might see some errors about validating such-and-such. At this point it's not a big problem — it's a warning, not an error.
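The install-and-start sequence just described, as a sketch:

```shell
# Refresh the package index so the new repos are seen,
# then install CRI-O and its matching runc.
apt-get update
apt-get install -y cri-o cri-o-runc

# Enable and start the engine, then confirm it is active.
systemctl daemon-reload
systemctl enable --now crio
systemctl status crio --no-pager
```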
A
So at this point things are looking good, and I can continue to the next step. Those steps I just did — let's look at my history again — from step two to step eighteen is to get CRI-O running.
A
All those steps could be replaced with apt-get install docker.io, okay? Just to kind of give you an understanding: if you chose the Docker route, you could replace all that with Docker, or in this case the harder — not really harder, but more steps — would be to get CRI-O running. Now we're back to both: no matter what your engine is, this is the process. We need to add the repository to get access to the Kubernetes software now, so I'm going to add another sources list file.
A
That is a Debian package line, and it has these parameters: apt.kubernetes.io, kubernetes-xenial — it's a little bit behind there — and then main; that's just the syntax for that repository. And we have another key that we want to make sure is in our environment, so we're going to curl and find this key: curl from that packages.google address and pipe it to an apt-key add, to add our key to our environment. It says okay, that's good, and then we do another apt-get update.
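A sketch of that Kubernetes repository setup — the "xenial" repo name really is what's used even on later Ubuntu releases:

```shell
# Add the Kubernetes apt repository...
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list

# ...trust its signing key, and refresh the package index.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
```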
A
So at this point we should be able to get access to our Kubernetes software, so let's go ahead and install it. You saw apt-get install now that the repositories work, and we want to install three different packages: kubeadm, kubelet and kubectl. The versioning depends on what you want to use; in this case, at the end of the package names I've put a particular version. If you leave that off, you'll get the newest version, so at the moment...
A
...it's 1.21.2, unless dot-three has dropped, but that's just it: updates happen. Major updates happen every three months, minor updates happen every seven to ten days, so just be aware that the one thing constant is change. So in this case I'd like to know exactly what the version is, which matches the exam at the moment. That's just something to be aware of: since there's so much change, if you're not paying attention and you install a different version, there might be differences in the API.
A
There might be subtle differences in commands, and then, when you're in the exam environment, you're like, "whoa, what's this? This isn't working the way I expected." So you always want to check. Just to call it out here: go to cncf.io, certification, CKA, scroll down, and use the curriculum overview and the handbook, and verify the version.
A
You'll also get that verification when you sign up for the exam; make sure that whatever you're using matches what that is. Now, in this case, I might be using, let's say, 1.20, because I want to practice updating my cluster. So I'll install one version previous and then I'll upgrade, and that way I get to practice that as well, and I'll see what a full upgrade of a major version looks like. So I'm going to go ahead and hit enter and install this software.
A
Now, because I might be in an active environment where other people are installing software and doing stuff, I don't want to accidentally get into a mismatch where I've initialized a cluster with a particular version and then end up with something different a day later when somebody does an upgrade. So I'm going to go ahead and hold kubelet, kubeadm and kubectl where they are, so somebody has to unhold them before they're able to update them.
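Sketch of the version-pinned install and hold; `1.20.1-00` here is a stand-in for whichever version matches the exam when you sit it:

```shell
# Install a specific, exam-matching version of the three tools...
apt-get install -y kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00

# ...and hold them so a routine 'apt-get upgrade' cannot
# silently move the cluster to a different version.
apt-mark hold kubeadm kubelet kubectl
```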
A
Because who knows what somebody is updating — it happens where somebody just runs the command and you get an interesting end result. This way we're locked at this version until we go out of our way. Now I would choose what network plugin I want to use and start taking a look at it. I should know what my network plugin is before I initialize my cluster. You can change just about anything afterwards; it just might be very difficult to do.
A
So what you would do then, in this case, is get it from Project Calico. I chose Calico for its features; it's fairly straightforward to use, and I think it's a good choice. Also, the exam environment has some options that use Calico. And we have a YAML file. Now, if we take a quick look at that YAML file — we'll use this after our cluster is initialized — one of the big problems people have when they set up their own lab environments is the IP ranges.
A
IPv4 — oh, before I hit next — oh, there's a lot of them. Okay, so we see that the default pool is 192.168.
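You can check that default pool yourself; a sketch (the manifest URL is the one Project Calico has long published, and the variable name is Calico's own):

```shell
# Fetch the Calico manifest and look at its default pod IP pool.
wget https://docs.projectcalico.org/manifests/calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
# The default pool is 192.168.0.0/16; if your VMs already live in
# 192.168.x.x, change one range or the other before kubeadm init.
```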
A
So if your VMs, like in VirtualBox, also chose 192.168, you're going to have lots of problems, because routing won't work. So what I would suggest — the easiest thing to do — is change your VM network to something that's not 192.168. This is probably the most common issue: people don't understand what's happening, and they chose an easy range, 192.168, and then there's contention and weird stuff happens. So the easiest fix, I would say, is to choose a different network range for your VMs.
A
It's one of the big issues. But at this point, then, we would find out what our primary name is. So "hostname -i", for example: the IP address of my VM is a 10-dot — it's a 10.128 — so there's no contention there between them, and now I can use it. If I eventually want to do high availability, I don't want my initialization script to be tied to the IP.
A
So I'd like to use a name instead; it gives me a little more flexibility. So I'm going to take that and add an alias to it. I'm just going to copy this, edit my /etc/hosts file, and insert it, and we'll give it a very original name of k8scp — okay, control plane — but whatever you want to call it.
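That alias step, sketched; 10.128.0.3 is a stand-in for whatever your control plane's primary IP turns out to be:

```shell
# Find the node's primary IP, then give it a stable alias in
# /etc/hosts so the cluster is not tied to the raw IP.
hostname -i
echo "10.128.0.3 k8scp" >> /etc/hosts
```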
A
The advantage of this is: if I generate or initialize my cluster tied to k8scp, then if I later use an exterior load balancer to go multi-master, those certificates will still line up. So that's just one of those looking-forward things. Now, there is a config file that you can use: if you choose Docker it looks like one thing; if you're choosing CRI-O, you might need something else.
A
Let's see, so I have one here — a quick find under $HOME for it.
A
Okay, so let's do a cat of that file real quick so you can see what it looks like. Inside of this kubeadm ClusterConfiguration: a particular version; k8scp — that's the alias I used — port 6443; and then this pod subnet matches Calico. They match each other, so they'll be set up the same way. If you're using CRI-O, which is what we're using, then there's a much bigger configuration file, so let's do a find for that.
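The small kubeadm config just read out would look roughly like this — a sketch reconstructed from what's on screen; the file name is my placeholder:

```shell
# kubeadm-config.yaml: pinned version, the control plane endpoint
# alias from /etc/hosts, and a pod subnet matching Calico's default.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.20.1
controlPlaneEndpoint: "k8scp:6443"
networking:
  podSubnet: 192.168.0.0/16
EOF
```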
A
And you can find this — you can search for a CRI-O config file — but what you end up with is all the settings CRI-O needs to know: what's my node registration, what's the name of it (k8scp), any other parameters, certificate directories, IP ranges, version of Kubernetes, DNS information — again, 192.168.
A
I use this just so you can see all of the things that you could use when you set up your particular cluster. So let's go ahead and copy that file to the current directory, and now I have the kubeadm init, and I tell it the config to use is going to be the kubeadm CRI-O file with all those parameters. Again, you don't necessarily have to pass all of them, but I wanted you to see that any one of these could change and might be necessary in your environment. --upload-certs is just to provide for later use by other masters, and I want to keep track of this output, so this is going to be teed to a cp.out file, for example, because there's a join statement in there that might go by and I might not see it. So if I didn't typo along the way, let's see if this works.
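The init step, sketched; the file names follow the demo, and cp.out is just where the output — including the join command — gets saved:

```shell
# Initialize the control plane from the config file, keep the certs
# uploadable for future control plane nodes, and save the output so
# the join command is not lost.
kubeadm init --config=kubeadm-crio.yaml --upload-certs | tee cp.out
```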
A
It says it's using the version I expected, it's doing some checks, and it's actually pulling down the images for those containers I mentioned during the architectural review — API server, scheduler, etcd, controller manager — they're all being pulled down, and then it's going to start them. And again, hopefully I didn't typo something in my example to you guys; when you're doing stuff live, that always happens, you know, something goes sideways. But so far, so good.
A
You know, this is usually where, if I've typoed that file, it has a problem, because now it's trying to use these various containers. The kubelet — we talked about kubelet as a systemctl service — kubelet is actually starting those containers for you. In this case we got lucky this time, and lo and behold it says: hey, it worked, it initialized successfully; to start using your cluster, you need to run this. So your...
A
...kubectl command doesn't know where to go; you have to tell it. So you could give it full admin capability. I'm going to do an exit statement, but I'm going to copy and paste what they tell me to do: copy this stuff over to your local directory so that you can actually use it. You can also do the export it calls out, but this way it's persistent. And when I run a command — kubectl get node — it says NotReady, control-plane.
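Those post-init steps are the ones kubeadm itself prints; as a regular user they look like:

```shell
# Give kubectl a persistent config as the non-root user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# The node will show NotReady until a network plugin is applied.
kubectl get node
```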
A
Remember I said it uses some other names — we're shifting away from that to control-plane. 42 seconds, so the containers are still starting, stuff is still happening, and we see a join command. But it also tells us right here: hey, don't forget to start your network plugin. So if you have Weave, kube-router, Romana, Flannel — there are options; we chose Calico. It will be a kubectl create or apply; let's go with what it tells me to do: -f calico.yaml.
A
Oh, whoops — I need a sudo cp from root's directory, where I downloaded it, over...
A
...there. Okay, now I can do it — I forgot to move the file over, that's all. Created, and it's now working, so the network plugin that allows me to talk to any worker and get where I'm going is now installed. kubectl get pod --all-namespaces: some of them are pending, some of them are running, and now that I'm this far, let me go ahead and join. So, okay, I'm on my worker here.
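Sketch of applying the network plugin and checking the system pods, as just done on the control plane:

```shell
# Apply the Calico manifest downloaded earlier, then watch the
# system pods come up across all namespaces.
kubectl apply -f calico.yaml
kubectl get pod --all-namespaces
```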
A
Just get it ready, and so when you're ready, you get the worker to that point: it's installed, CRI-O is running, the software is there, you've installed your kubeadm, kubelet and kubectl tools. Then you use this join statement, which was in the output — kubeadm join — and it has a generated token and a hash, and with that the worker will join the control plane and you'll have two nodes. You can keep joining workers; be aware there's a time limit on that.
A
So if you go tomorrow and try to add a worker, you have to regenerate some of those tokens — the hash actually stays the same, but the token changes; I believe 24 hours is the default. But now that it's been a bit, let's see what happens: kubectl get node now, and it says Ready. kubectl get pod...
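The join flow, sketched — the token and hash are obviously placeholders; the second command regenerates a full join command after the default 24-hour token expiry:

```shell
# On the worker, join using the command saved in cp.out:
# kubeadm join k8scp:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>

# If the token has expired, print a fresh join command
# from the control plane:
kubeadm token create --print-join-command
```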
B
Yeah, we can join the worker.
A
Okay, okay, so we're going to join the worker. So, just to make my life a little easier, let's do it this way.
A
And so again, let's join the worker — so again, from root's history: sudo -i, and since I have it here, what I can do is actually sort of like this: I can do the overlay.
A
Information — okay, there it is, and so that's my sysctl information. Okay, and then the version, VERSION is 1.20, but I don't see OS listed there. That's interesting; that didn't get saved in my history for some reason. So let's go ahead and make sure we have that: OS is set, VERSION is set, and then let's start adding the stuff. So we added that, and we add the key for it.
A
And it's interesting that the history isn't necessarily showing everything, so this is always fun. So let's add the key for libcontainers — okay, that's there — and so we have the various bits of information; I've done two keys. And let's test that I didn't typo something: apt-get update, and I see the cri-o is listed and the libcontainers is listed and it appears to actually be working. Then install cri-o and cri-o-runc.
B
Yeah, no, but that was really interesting, because CRI-O by far takes more steps than the other ones out there, whether it's containerd or whether it's Docker. So I think that was a very neat and very good demo based on CRI-O. So basically, whoever is watching: you now have the complete installation steps, so you can actually set this up. You just need two compute nodes — get them from anywhere, just have two compute nodes — and you can...
B
...then install all this stuff that is required to set up Kubernetes, starting from Ubuntu — because that again resonates with the exam, so let's go with Ubuntu 18.04 for now. Then you can install all the components: kubeadm, and then CRI-O, then get the Calico YAML file, also have your kubeadm config YAML file, and hold your kubeadm, kubectl and kubelet so that it prevents automatic upgrades.
B
So anybody who has to update should unhold first and then do the upgrades; that is very important. And yeah, then you just do the kubeadm init, which is the magic command, and that will set up the cluster and give you some commands to run on the master. Then you have to set up the networking, because the networking components, again, are separate from kubeadm.
B
So you can choose as per your preference, and then you will have your join token — basically not only the token but a full join command — that you can directly run on the worker nodes. And on the worker nodes, that particular setup — kubeadm and the rest — is also required, because you have to run the kubeadm...
B
...join command, so you need all those things already set up on the worker nodes as well. And that's what Tim has been doing: setting all that up and putting it on hold, and then you'll just be running the kubeadm join command.
A
Absolutely, absolutely. So you can see what I've done on the worker node — again, the same thing, where I just made sure that the prerequisites for the network were set up, and I set a version and an OS.
A
I'm not sure why the second export never shows up in my history, but: making sure I get to the openSUSE repositories, one for the CRI-O, one for libcontainers; making sure that I install cri-o and cri-o-runc from them; enable and start it; add access to the Kubernetes software itself and the key; update /etc/hosts. I installed the software — I forgot to do the update before — then I installed kubeadm, kubelet and kubectl of the same version, and I made sure to hold them as well.
A
So now my worker is ready for that join command. Now, if I didn't see it go by, let's go find it again. So if I were to grep -A4 for the word "join" out of my cp.out file, I would find it, with what I need, right there. So that's the join command. Let's see if I did my worker correctly. So I said join, and it's trying to connect to the control...
A
...plane. The Calico — that's the network plugin — is just getting loaded onto the worker node so that it can start communicating and handling the network. The other various pieces, like CoreDNS and kube-proxy — remember we talked about kubelet, kube-proxy and so forth — that's running, and I'm guessing that just after a second here — I'm going to try again — everybody's running, I have a cluster, and kubectl get node: they both show...
A
...Ready. Great. Questions? Anything coming my way on that?
B
Awesome, yeah. There have been a couple of questions, but we were going with the full flow, so I didn't want to break that. And amazing — like I said before, the best architecture explanation, and now this was the best demo for a Kubernetes setup as well, and I think people are agreeing with that. People have loved the demo, and we have a comment like "this is how a demo should be," so that's really good.
B
So there are a couple of questions, and I think we can take them. Like: do we need knowledge of Flannel and all those tools? I can read a couple more.
A
Yeah. If you go in — which you may have already done, so let me share that screen — to the exam topics, it says you have to have some basic knowledge of the networking. So of course, what does basic mean? The idea being that you should be able to kind of point at what you need and say: do I want Flannel? Can I identify the differences between them? So, are you seeing my browser now?
A
Awesome. So if I go into the curriculum overview and find the CKA curriculum PDF — it gets generated here — and I scroll down, these are all the bullet points. You probably went through this before; let me just make some points again. Take every single bullet point here, put it into a word processor, and underneath write the commands to do it. Anywhere you see the word "understand", that's not what you think.
A
"Understand" means create, integrate, troubleshoot, delete, repair. So anywhere you see the word "understand", it's much bigger than you might recognize. The exam is browser-based testing, so you don't control the particular cluster. If you go into the candidate handbook — which I suggest you read; those are the two documents I suggest you read — it does mention, if you look at the clusters, that you have several clusters available to you, one or two of them running Calico. The other ones are running Flannel; Flannel runs everywhere but doesn't have a lot of features. So it's...
B
...your name, and then I can send you the 50% off certification vouchers. So please, if you are watching the stream any time later as well, make sure you send me your name — that would be awesome. And yeah, thank you, Tim, for your time. It was a super awesome stream, to be honest, and people watching later will love this.
B
So make sure you do that. This is a bi-weekly show on Thursdays, 8:30 p.m., and we'll try to get Tim again some time — let's see, depending on his schedule — or some other folks, or it will be just me, and we'll be continuing some of the learnings and some of the cool demos like we did today. So thank you, Tim — anything you would like to see?