From YouTube: Workshop 1: Kubernetes 101
Description
In this workshop we'll cover the basics of Kubernetes. We'll start by learning its architecture and exploring different building blocks like Pods, ReplicaSets, Deployments, etc. We'll then explore the Service object and see how we can expose our applications with different configurations.
A: So good morning, folks, thanks for joining in. This is Neependra, and I welcome you to KCD Bangalore's first workshop, which we are starting today. After this we have many other workshops lined up, just to give you a complete flavor of the, what you call, cloud native stuff, right. So we start with Kubernetes 101.
A: Then we have lifecycle management for cloud native apps, then Argo Rollouts, which will be the CI/CD thing, then chaos engineering, then we'll have "how do you deploy Istio", just a service mesh one for you. Then we'll have a security workshop, and then multi-cloud Kubernetes lifecycle management, which is a bit on the advanced side. A few more workshops may come in between, so we will keep you updated.
A: But the idea of doing these workshops in this order is that you'll have some time till the main event, and if you want to attend all of them, you can do it. So you can take this workshop, practice, come to the next one, and so on. That is the whole intention of doing these workshops over these four to six weeks, whatever we have remaining there. So for this particular Kubernetes 101 workshop, as we know, we are going to cover this agenda today.
A: This is going to be very beginner focused. I just assume that you know a bit about containers and Docker, and nothing about Kubernetes, so we are going to start from the very basics. Let me just see, by a show of hands, how many of you have worked on Kubernetes there.
B: Basic setup of Docker and running a container.
A: Come again?
B: So I worked on the basics, you know: setting up a Docker container, creating a container and helping the team. Okay, I have a...
A: ...basic knowledge. That's okay! Basics are all we are looking for here. So basically, what is a container? Basically, you package your application and put it in a box, and we call that particular box a container image, and from that image we can create containers wherever we wish, right. That's the whole idea: you take your application binary with its dependencies, put it in a box, or a package, called an image, and this runs the same everywhere.
A: This is going to save time for everybody as we use them across environments. Yeah, okay, so I'll touch upon that briefly, I think, in our Kubernetes 101 overview when we start the course. So I just want to make sure that you have access to this particular page here; I am putting this particular link on the chat again.
A: You can click on this "access course" button; right now it will give you a "lab setup" button here, so I'll talk about that briefly. I just want to make sure, first of all, that all of you are signed in to the portal. Let me know if you're facing any issues signing up for the portal.
A: Done? Okay, great. Now, let's keep it very interactive. If you have any questions, please feel free to just ask away. Don't wait for me to finish a topic; whenever you feel like it, please just ask the question. Yeah, okay, anybody? I assume that nobody is waiting. Now, once you're in here, this particular platform has the content. As you can see, we already have pre-recorded content with the hands-on labs, and along with that there is the lab environment.
A: So currently I see "OpenID" here, but as you just joined in, you might have a "lab setup" button, and you can choose the number of hours you want to have; please choose four hours or eight hours. You'll be given 40 hours overall, I believe, and you can choose eight hours right now, or four hours, that would be okay. So once you have triggered the lab...
A: ...what you'll be getting is a terminal like this, on which you can work. And as you look at our course curriculum here, wherever the commands are, because the lab has been triggered here, you can run those commands from the terminal, the way I am running any commands here.
A: There's a button in front of every command; you can click it, and that will also run the command for you. So you can either use the terminal, or just hit the command button here like this, or there is an IDE as well, on which you can write YAMLs and so on. So there are different ways to interact with the setup, and I hope that will also sort the problem for Karthik, if you want to just do it on the mobile as well. Okay.
A: ...stuff, and then we'll jump into the other topics there. So before I start, I just want to introduce myself quickly, and then we'll get going. My name is Neependra, and I'm the founder of CloudYuga. Before starting CloudYuga in 2015-2016, I worked for over a decade in different companies, primarily with Red Hat; I have experience in support engineering, file system and kernel development, and performance engineering.
A: In 2015 I wrote a book on Docker, and after that I realized that it was going to change the way we develop and deploy applications. And because I love teaching, that's why I thought of moving on and just trying it out with CloudYuga and seeing how I could do it. Since then it's been going very well, though I have kind of gone back to a corporate job as of now. For those who just joined in, I'm giving a link on the chat.
A
Please
follow
that
signing
for
the
quotes
and
so
on.
Yeah.
A: Then, in the 2017-18 time frame, I also authored the Introduction to Kubernetes course with the Linux Foundation. That course has now been taken by more than 200,000 users across the world. So if you go to cncf.io, they have a, say, public training on the edX platform.
A
So
if
you
look
at
this
course,
this
is
the
I
authored
course
for
the
first
time,
then
chris
is
kind
of
working
on
that
course
to
modify
it
and
make
it
up
to
date
and
I'm
also
a
cncf
ambassador
and
and
running
this
kcd
bangalore
event
along
with
others
as
well.
Okay,
so
that's
a
quick
intro
about
myself,
so
just
a
whole
session.
What
I
just
shared
a
link
on
the
chat,
I'm
going
to
put
it
again.
A
So
if
you
are
just
joining,
you
can
do
that
so
just
sign
for
the
course
there
now
in
the
course
once
you
come
in
so
in
this
course,
we
already
have
pre-embedded
slides
content
and
the
hands-on
labs.
A
So
what
we'll
do
is,
if
you
just
come
in
you,
can
trigger
here
the
lab
setup
for
eight
hours
or
for
four
hours,
and
then
it
will
kind
of
bring
up
the
environment
like
this,
on
which
we
would
be
able
to
kind
of
run
any
commands,
and
of
course,
these
commands
can
also
be
executed
from
the
terminal
itself
or
from
the
terminal
as
well
as
we
can
kind
of
do
it
from
our
platform
itself.
So,
as
you
kind
of
look
at
there's,
a
part
of
execution
is
there.
A
So
if
you
trigger
the
lab,
then
you
should
be
able
to
run
the
commands
just
like
this
as
well.
Okay,
so
I
think
that
that's
good
we'll
now
get
going
with
our
content.
What
we
have
come
here
for
so
there
is
a
quick
video
I
have,
which
I
would
have
you
play
a
bit
later
other
than
write
down.
So
this.
A: What is... why do we need a container orchestrator? If you have used Docker earlier, then with Docker you could run containers on a single machine. Let's see... yeah. So let me just talk briefly about what containers are, if that's okay. So traditionally we have been deploying our applications on physical hardware: we used to have the hardware, an OS on that, and then we deployed our applications directly on that OS.
A: Now, if any of the applications fails, or has some kind of an issue, it can make our entire OS go bad, and the entire OS can fall apart. That's the problem we had with traditional systems. Then, of course, over the last 15 years or so, we moved toward virtualization. What we did, basically: on our OS we deployed a layer called the hypervisor, and with the hypervisor we could partition our underlying hardware into different chunks.
A
We
deploy
that
any
developer
application.
On
top
of
that.
But,
as
you
look
at
here,
our
intention
was
to
run
the
application.
The
intention
was
not
to
run
the
vm
right
now.
What?
If,
if
I
take
this
application
and
run
directly
on
the
os,
what
I
have
in
such
a
way,
this
app
by
itself
behaves
as
an
independent
entity.
For
example,
my
app
has
its
own
ip
address,
its
own
host
name,
so
it
kind
of
behaves
as
an
independent
entity
and
that's
what
container
is
right.
A: And it saves cost as well, because we save time, and all of those things matter, right. So how do we save time with containers? Let's say you have been in a traditional app deployment cycle; you've seen that, right: dev, QA and operations. When a developer develops something, QA would need to test it, right.
A
Now
you
can
run
50
of
containers,
I'm
just
giving
the
number
there
50
girls
on
the
same
machine,
which
is
50
apps.
You
can
run
that's
going
to
save
the
money
for
you
correct,
so
that's
how
the
main
reason
why
you
move
to
containers.
Of
course,
then
you
have
immutability
for
you
building
image.
All
those
things
would
come
along
with
it
now
containers
are
the
feature
of
the
underlying
os.
It
is
not
that
docker
has
come
somewhere
on
the
fly
and
just
built
everything.
A
No,
so
our
os
have
the
feature
of
arc
basically
resource
isolation,
resource
allocation,
for
example.
I
want
to
give
every
program
it's
an
ip
address,
host
name
that
we
get
from
the
feature
of
name
spaces.
A: Now, there are different ways by which you can connect to your OS and request these, correct? Docker, as you rightly said, is one of the ways, but there are many other ways you can do that. That is what we call a container runtime. A container runtime is the means by which you connect with the underlying OS and create the containers.
A
But
then,
when
you
create
these
containers,
what
we
saw
here
this
container
is
running
only
on
a
single
host
right,
so
this
is
running
on
a
single
host.
Now,
if
one,
if
this
host
dies
right
for
whatever
reason,
maybe
kernel
got
corrected
in
power,
go
power
went
on
and
so
on.
Here
all
the
applications
are
also
gone
correct.
B: What does "hypervisor" mean?
E: So, one question here. Can you hear me? Yeah. Neependra, I understand about Docker; Docker and Rancher have a facility where they can have multiple containers on multiple nodes. Am I right? So why is there a need for...
A: Okay, now, as I said, if you have these containers on a single host and the host dies, everything just goes haywire, right. That's where we need some kind of an orchestrator, where I can connect multiple nodes (those nodes can be VMs, physical machines, or any computer in general) and form a cluster to run our containers at scale. So basically, think about it: we take four VMs...
A: ...we put them together with the help of software like Kubernetes, and then we deploy our apps on that. By doing that, we are going to get the features listed here on the screen, or at least some of them. Let me go over what I mean by each of them. First of all, container provisioning: when you deploy to a cluster, you don't talk to individual nodes to say "okay, deploy my application", right.
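In Kubernetes terms, "not talking to individual nodes" means you describe what you want in a manifest and hand it to the control plane. As a minimal sketch (the names and image here are just placeholders, not from the workshop labs), a Pod manifest looks like this:

```yaml
# Minimal Pod manifest: you submit this to the control plane
# (for example with `kubectl apply -f pod.yaml`); you never log in
# to a worker node to start the container yourself.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # placeholder name
spec:
  containers:
  - name: web
    image: nginx:1.25    # any container image works here
    ports:
    - containerPort: 80
```

The control plane (API server plus scheduler) then decides which worker node actually runs the container.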
A: Now, here, let me go back. Yeah, so I talked about container provisioning first of all. What I'm saying is, I'm not going to communicate with an individual node to deploy the application. Basically, if I am an end user who deploys the application, I would send my request to the control plane node, and the control plane node would deploy my applications on my behalf on the different nodes there. Okay, the second thing is fault tolerance.
A
Now,
if
one
of
the
node
dies
in
that
cluster
automatically
the
application
running
on
that
node
would
move
somewhere
else.
What
I'm
saying
is
if
this
node
dies
automatically
the
apps
running
on
that
move,
node
somewhere
moves
somewhere.
That
is
a
node
level
fault
of
the
lens.
Similarly,
for
single
app
has
that,
for
whatever
reason
the
a
replacement
copy
of
that
would
come
up
as
well.
That
is
the
the
app
level
fault
tolerance.
What
you
have,
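The app-level fault tolerance described here is what a ReplicaSet (usually managed through a Deployment) provides; a sketch, with placeholder names:

```yaml
# Deployment asking for 3 replicas; if a Pod (or the node under it)
# dies, the controller creates a replacement to get back to 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy   # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.25
```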
Then we can do scaling, which can be both automatic and manual.
A: So when you want to deploy applications at scale, and your traffic is growing and you want to scale up, either you can do manual scaling (manually say "okay, now my load is going up, I'll scale up"), or you can build some kind of automation, where you say "okay, the requests per second have gone beyond some threshold, I want to scale up". This scaling, again, you can do at both the app level and the node level.
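The automated flavor of app-level scaling is what the HorizontalPodAutoscaler object provides; a sketch, assuming a Deployment named `hello-deploy` exists and a metrics server is installed in the cluster:

```yaml
# Scale hello-deploy between 3 and 10 replicas,
# targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa      # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-deploy
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```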
A
I
mean
you
deploy
the
single
replica
as
well,
but
in
when
you
scale
it
up,
you're,
going
to
run
multiple
copies
of
the
same
application
on
different
machines
right
or
anywhere
in
the
cluster.
So
how
would
you
kind
of
refer
them
by
a
common
name
right?
So,
for
example,
if
you
open
a
website
as
a
google.com
right,
google
would
be
getting
served
by
multiple
vms
or
nodes
behind
the
scene,
but
for
you
it's
just
a
google.com
now.
Similarly,
here
we
need
somewhere
to
deploy
multiple
replicas
of
our
applications.
A: Similarly, here, when we deploy multiple replicas of our applications, I want some way to refer to them by a common name. That is what service discovery is; again, we'll dig deep into it.
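The "common name" for a set of replicas is the Service object, which the workshop covers later; a minimal sketch, assuming the replicas carry the label `app: hello`:

```yaml
# One stable name (and cluster IP) in front of all Pods
# matching the selector, wherever they are scheduled.
apiVersion: v1
kind: Service
metadata:
  name: hello-svc      # placeholder name
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
```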
Then we have different strategies of deployment. When you move your app from one version to another (today you are running the v1 app and you want to move to v2), you may want to say "okay, I'll move to v2 only if v2 is 20% faster", or you may want to update on a rolling basis.
A
I
suppose
I'm
running
hundred
copies
of
my
applications
right
and
I
can't
just
bring
down
all
100
at
once
and
then
create
new
ones
right.
I
want
to
do
a
rolling
update.
I
would
bring
down
few
of
them
up
and
then
bring
new
one
up
and
so
on.
So
this
is
again
a
feature
which
you'll
get
with
orchestration.
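A rolling update is the default strategy of a Kubernetes Deployment; the two knobs below control how many Pods may be taken down or added at a time (the values here are only illustrative):

```yaml
# Fragment of a Deployment spec: replace Pods gradually,
# never dipping more than 1 below, or going more than 2 above,
# the desired replica count.
spec:
  replicas: 100
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 2
```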
A
Then
you
have
based
on
story.
Orchestration
means
that
when
you,
so
when
you
create
these
applications
or
containers
right,
these
are
temporary
right
because
they'll
not
have
any
volume
or
any
content
right
which
is
store
for
them
permanently.
So
I
need
some
kind
of
a
volume
management
so
that
my
data
is
stored
outside
and
if
my
container
moves
from
one
root,
another
node,
I
would
move
my
data
as
well.
A
So
what
I
am
trying
to
convey
here
suppose
we
have
a
storage
which
is
a
whatever
some
public
storage
you
have
and
currently
the
storage
attacks.
This
particular
container,
for
example,
right
now,
this
container
dies
here
and
comes
on
other
machines
right,
so
I
want
to
make
sure
that
this
storage
is
moving
along
with
me.
That
is
what
I
mean
by
the
isla
orchestration,
so
I
need
all
of
these
features
to
be
in
production.
A: Correct? Any comments here, or any questions?
A: But of course they have some distinguishing features, and some of them are unique to themselves. For example, Amazon ECS would only work on Amazon. That is good for the Amazon ecosystem, but it is not good for me and you, who may not want to be tied to the Amazon ecosystem, for example.
A
Right
so
amazon
is
ecs
is
built
for
amazon,
specifically
right,
docker
swarm
was
built
on
top
of
docker,
but
it
lacks
the
feature
which
was
needed
to
do
a
real
broad
environments,
and
then
there
were
a
lot
of
issues
in
between
so
there's
the
concept
of
commodities,
part
which
was
very
fundamental,
which
I'm
going
to
explain
that
changes
the
the
way
you
deploy
the
applications
on
on
the
cluster
right.
So
that
is
what
kubernetes
got
it,
but
not
darker.
A
So
so
those
kind
of
things
which
we
look
at
in
detail
right,
you
will
feel
that
docker
swamp
lacked
those
kind
of
features
and
understanding
which
was
needed
for
production
environment
and
that's
where
docker
form
did
not
get
enough
attention
while
hashicob
nomad
is
a
really
good
product
from
the
hashtag
team,
and
this
is
going
to
do
a
very
well
for
our
regular
app
deployments
and
upscaling
and
so
on.
But
where
kubernetes
wins
on
top
of
nomad
is
that
I
can
extend
nomad
or
sorry.
I
can
extend
kubernetes
the
way
I
would
wish
to
so.
A
There
are
ways
by
which
you
can
extend
kubernetes
for
your
own
work
for
your
customization
right,
so
that
extensibility
is
at
a
great
great
level
in
kubernetes
and,
of
course,
that
kind
of
made
it
very
much
and,
of
course,
a
popular
and
adoptable
to
different
platforms.
A
So
that's
where
kubernetes
kind
of
won
the
war
there,
and
then
it
is
now
the
leading
orchestration
engine
for
you
correct,
also
the
the
project
by
itself
right,
it
has
come
from
google,
so
google
has
been
running
containers
from
2008
onwards
and
they
have
been
using
internally
called
borg.
So
borg
is
their
project,
which
was
they
run
internally
and
they
had
experience
what
it
meant
to
run
cadets
at
scale
and
in
production,
so
when
they
saw
that
that
docker
is
becoming
popular
for
app
deployment
purpose.
A
So
that's
when
the
google
open
source
support
called
kubernetes
and
some
of
the
google
engineers
a
founder
project
and
then
com
companies
like
red
hat,
ibm
and
so
on
start
contributing
the
project
and
eventually
what
happened?
Is
this
project,
along
with
the
trademark,
a
change
to
or
ownership
change,
thus
community?
A
It
has
come
under
the
linux
foundation.
So,
let's
foundation,
I
believe
all
of
you
know
whatever
foundation
is.
It
is
like
the
foundation
which
kind
of
governs
the
roadmap
of
linux,
down
the
line
and
so
on
right,
so
future
present
in
what
is
comes
under
linux
foundation
umbrella
under
that
the
cncf
foundation
was
found,
and
this
kind
of
now
owns
the
the
officially
the
kubernetes
project
by
itself.
A
So
if
you
look
at
the
cncf
first
of
all,
there
are
many
members
with
these
foundations,
which
kind
of
help
to
basically
run
this
particular
governments
of
this
this
foundation
here.
So
there
are
different
members
like
platinum,
gold
and
silver
members,
and
we
are
also
the
cylinder
member
on
this
foundation.
A
Now
this
foundation
mix
gives
the
common
ground
to
the
different
projects
which
are
in
the
ecosystem.
So,
for
example,
let's
say
you
build
something
some
project
for
yourself
in
the
company
for
let's
say,
which
kind
of
makes
database
running
very
easy
on
kubernetes,
for
example,
and
you
want
it
to
be
adoptable
by
others
as
well,
but
as
an
individual
as
a
company,
you
might
find
it
difficult
to
make
it
marketable
and
adopt
to
basically
find
a
valid
option
there.
So
what
happens?
A
You
can
come
to
the
cnc
foundation
and
come
to
the
sandbox
stage,
if
you
like
they
like
it.
If
the
community
likes
it,
then
you
kind
of
go
to
the
stage
of
incubation
and
graduated.
So
this
is
how
the
project
kind
of
moves
different
categories
so
that
they
kind
of
get
adopted
in
different
levels
right.
So
if
something
is
graduated,
which
means
it
is
like
well
adopted,
and
you
can
scale
it
up
as
as
much
as
you
can
then
csgo
give
different
trainings
and
all
that
stuff
as
well,
so
that
I'll
just
skip
here.
A
That's
all
I
talked
about
yeah,
so
commodities
can
run
everywhere,
be
it
on
your
on
a
laptop,
be
it
on
a
public
cloud,
private
cloud,
so
commodities
can
be
run
everywhere
because
the
weight
has
been
designed.
So
it
has
something
called
as
an
api.
So
we
have
the
kubernetes
api
server
which
I'll
talk
about
later,
so
we
communicate
to
this
api
server
to
perform
any
operations
and
that
api
is
going
to
be
common,
irrespective
of
which
environment
you
are
in
correct,
which
means
you
would
not
be
bound
to
a
given
cloud
to
wider.
A
For
example,
let's
say
you
are,
you
were
on
aws
water
right,
you
know
aws
and
you
built
everything
and
around
database,
but
for
whatever
you
want
to
move
to.
Let's
say
azure
right
now,
typically,
because
if
you
are
bound
by
completely
on
aws,
it's
very
difficult
to
move
from
one
to
another
cloud.
But
if
you
are
on
top
of
let's
say,
kubernetes
right,
it's
going
to
be
same
everywhere,
so
migration
and
everything
would
become
very
easy.
A
So
it
kind
of
becomes
a
neutral
thing
where
you
can
easily
migrate
between
the
clouds
or
you
can
have
set
up
with
multi-cloud
and
all
that
stuff
right.
That's
where
kubernetes
shines
there,
so
it
can
run
everywhere.
A
It
is
very
stable
at
course,
so
communities
is
very
stable,
very
minimal
in
the
at
the
core,
but
you
can
also
extend
it
for
whatever
your
needs
are
so
like
different
companies
are
publishing
their
projects
for
their
software
on
top
of
communities
for
communities
as
a
the
cids,
which
you
call
them
to
extend
commodities
for
the
requirement
of
the
specific
requirement
there.
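The CRDs mentioned here are CustomResourceDefinitions: you teach the API server a new object kind, and your own controller acts on instances of it. A minimal, hypothetical example (the `Database` kind and `example.com` group are made up for illustration):

```yaml
# Defines a new kind `Database`; after applying this,
# `kubectl get databases` works like any built-in resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              engine:
                type: string
              replicas:
                type: integer
```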
A
It
is
very
versatile
for
hybrid
setup,
so,
for
example,
if
you
have
on-prem
set
up
and
on
cloud
setup
because
again
the
apis
are
same,
you
can
have
the
private
setup
with
less
number
of
nodes.
Only
on
demand.
You
can
go
in
the
cloud.
So
that's
where
you
can
save
very
good
amount
of
money.
If
you
go
to
hybrid
setup.
A: Okay, I just want to check: is anybody having an issue with the lab setup? No? Right.
B: On a single machine we are able to virtualize, using a hypervisor or different tools to get connected, and each device has an IP, right? How does that networking happen? How are we able to provide public IPs for different VMs? Can you just touch base on that?
A: See, again, there are different ways to give VMs public IPs, or we can give them via the node as well. If you talk about VMware or any other stack, there are tools available with which, first of all, you can attach a complete network to a VM instance; that possibility is there, correct. And then, of course, at the network layer you can take it out; there are different specialized hardware options available...
A
That
can
basically
give
your
vms
a
public
ip
as
well.
So
those
are,
I
think,
pretty
much
well
defined.
Things
are
there.
So
those
are
I
mean
the
solves
all
problem
in
the
vm
in
the
in
the
world
of
virtualization.
There.
E: That was a very great question which Karthik brought in. When he mentions how the hypervisor VM is going to have access to the node or the cluster, are we referring here to the Kubernetes CNI, which is the network plugin, or are we referring to the default network which is set up when we have Docker installed by default?
A
So
there
are
two
things
he
asked
right.
First
of
all,
I
believe
we
just
asked
about
how
would
vms
get
the
ip
address
right,
so
the
so
vms
are
independent
of
containers
right.
You
install
vmware,
and
you
deploy
vms
on
that
right,
yes
or
you
deploy
kvm
and
deploy
vms
on
that.
So
for
that
purpose,
as
I
said
that
either
you
say
the
nsx
right
kind
of
solution
there
right
so
that
gives
the
ip
address
to
your
vms,
which
came
publicly
under
seven
correct.
A
Yes,
sir,
I'm
gonna
talk
about
commodities
terms,
we'll
deploy
these
apps,
we'll
call
them
pods
and
for
parts
I'll
touch
upon
that,
how
we
can
do
the
networking
part
for
that.
So
that's
a
different
story,
so
that's
a
cni
way
and
so
okay,
let
me
go
back
here
so
now.
Here
we
have
different
containers
running
on
different
machines.
A
Now
these
containers
within
the
cluster
they'll
have
a
private
ip
and
they
are
addressable
within
the
network
itself
correct
and
that's
where
we
will
talk
about
cni
the
cni
help
in
that
scenario,
where
you
are
connecting
the
containers
of
different
host
correct.
So
so
that
is
two
different
things.
One
is
talking
about
vms
talking
about
containers,
so
in
containers
in
a
very
minimal
setup
or
a
very
basic
requirement
setup.
A
You
need
to
have
some
kind
of
a
plug-in
using
which
I
can
connect
the
containers
of
different
hosts
that
we
call
it
as
cra,
plug-ins
and
they're
different
drivers
like
calico,
11
and
so
on,
which
would
be
there.
Okay,
I
think
I'll
come
back
to
that
questions
or
answer
in
a
detail
as
after
I
talked
about
few
more
basic
stuff
there.
Thank.
A
Okay,
okay,
all
good
okay,
so
we'll
get
going
so
now
we'll
touch
upon
the
kuwait's
architecture
here,
okay!
So
now
again
I
said
that
right
we
need
an
orchestrator
which
will
need
one
or
multiple
nodes
together
to
form
a
cluster.
A
So
here
we'll
take
an
example
of
there
are,
let's
say,
four
vms
or
four
physical
machines,
which
we
basically
consider
as
our
nodes.
Right
now
with
those
nodes,
I
am
just
going
to
create
a
cluster
now
in
this
particular
environment,
one
of
the
node
is
a
control
plane
node,
while
remaining
three
are
your
worker
machines
worker
nodes?
Are
there.
A: On the control plane node we have components like the API server, the controller, the scheduler, and etcd, the key-value store, which I'll talk about; it is there to store your cluster state. On every node we have a kubelet, kube-proxy and a few more things, which we'll look at later. As a cluster administrator, or as an app deployer, I'm going to communicate with my control plane to say "okay, deploy the applications for me", and so on.
A
So
let's
look
at
the
workflow
here
so
suppose
I
am
a
app
deployer
and
I
say
that
go
ahead
and
deploy
an
application
for
me
so
deploy
let's
say
three
copies
of
an
app
for
me.
That's
the
request
sent
by
my
app
deployer,
the
application
request,
comes
in
to
the
control
plane
node
by
the
api
server.
So,
first
of
all,
if
you
look
at
here
in
the
api
server,
it
is
the
main
communication
agent
with
everybody
right
as
an
end
user.
I
am
talking
to
the
ap
server
with
the
control
plane.
A: So we are sending a request to deploy three copies of an app for me, and that request has come to the API server. Now I need to store that information somewhere, in a persistent way, and for that purpose we have something called the key-value store. The key-value store is the place where you store your cluster state, or the desired state...
A: ...you want to be in. That's getting stored in the key-value store, which is like the brain of the system. This key-value store can be part of the cluster, or it can be outside the cluster; it can be anywhere, as long as my API server can communicate with it. Okay. So this is the place where I am recording that, okay, I got a request to deploy three copies of a given app.
A
Now
that
request
would
be
set
to
a
scheduler
where
it
would
tell
you
where
those
apps
need
to
be
deployed,
because
scheduler
knows
how
many
nodes
are
in
the
cluster.
What
are
the
configuration?
What
apps
deployed
earlier?
So
it
has
a
control
of
what
kind
of
knowledge
about
the
resource,
consumption
and
utilization
there.
So,
based
on
that
again,
you'll
send
the
request
to
the
ap
server.
Api
server
would
then
say,
go
and
deploy
that
this
app
on
a
given
node.
A
So
assuming
that
we
said
that
three
copies
of
my
app,
then
we
got
a
request
to
the
ap
server
scheduler
and
we
said
now
deploy
one
copy
on,
let's
say:
node
one
and
two
copies
of
node
two,
for
example,
so
ap
server
would
communicate
the
qubit
of
individual
nodes
and
say
deploy
one
copy
here
and
two
copy
of
the
apps.
Here
now,
once
my
app
get
deployed,
I
would
need
to
make
sure
that
they
are
running
all
the
time
correct.
A
So
if
I
said
that
okay
deploy
two
copies
for
me
correct
or
deploy
three
copies
of
the
app
for
me
right,
I
want
to
make
sure
that
at
any
point
of
time,
all
three
copies
are
running.
A
That
is
what
controller
make
sure
controller
basically
make
sure
that
at
any
point
of
time,
your
entire
clusters
desired
state
and
current
states
are
matching.
Otherwise
it
will
try
to
bring
into
the
state
which
is
desired
state
there.
So
controller
is
make
sure
that
your
all
the
apps
are
up
and
running
correct.
A
Then,
once
my
app
get
deployed,
then
we
have
these
q
proxies.
This
is
going
to
allow
the
excellent
traffic
to
come
inside
the
cluster.
That
is
what
q
proxy
is
going
to
help
me
do
from
where
I
get
the
traffic
from
the
excel
one
to
the
internal,
an
entire
cluster.
Here
it
does
more
job
which
I'll
explain
it
later,
but
for
now
q
proxy
is
there
to
get
the
excel
traffic
come
inside
the
cluster.
F: For example, you mentioned that I need to deploy three apps, and one app has failed or died. That particular information will be communicated to the API server, and then that will be communicated to the controller, because otherwise the controller doesn't know anything about what is happening with the app. So that part of the communication, if you can just explain it a little bit.
A: Yeah, I just simplified it a bit. So, first of all, there would be some kind of health check happening in between. If, for example, a container fails just for a health reason, like a health check within the app, that is what the kubelet will take care of: "okay, one of the apps is failing, let me try to restart it here". So that is going to happen within the node.
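The in-node health check the kubelet performs is configured as a probe on the container; a sketch, assuming a hypothetical app that serves a `/healthz` endpoint on port 8080:

```yaml
# The kubelet hits /healthz every 10s; after 3 consecutive
# failures it restarts the container locally, without
# involving the control plane.
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod       # placeholder name
spec:
  containers:
  - name: app
    image: my-app:1.0    # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
```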
A: Correct. So on this node you would be running a few applications, and it is known, via the API server and the key-value store, which apps are running on that particular node. Along with that, the controller checks whether the right number of apps is running or not, and this check, again, goes through the API server.
A
Only
whether
the
right
number
of
apps
are
running
at
any
point
of
time
or
not,
and
if
they're
not
matching
correct,
then
again
they
present
talk
to
the
scheduler
again
saying
that
deploy
the
applications
again
to
make
sure
that
they
are
coming
in
the
right
state
there.
So
this
is
basically
in
a
way
that
will
just
check
whether
the
states
are
matching
or
not.
If
they
cannot
do
more
than
that
for
the
remaining
stuff,
we'll
agree
to
kind
of
go
to
the
same
state
of
ab7
scheduler.
There
make
sense.
F
Yeah
yeah
and
one
more
thing
that
the
key
value
store
will
all
will
only
be
updated
once
when
you
deploy
the
app
and
if
the
request
doesn't
come
like
okay,
fine.
Now
I
want
to
deploy
two
more
apps,
then
the
key
value
store
will
always
remain
three
and
and
at
any
point
in
time.
If
any
application
fails,
then
the
api
server
will
keep
deploying
the
the.
A: Correct. The key-value store just stores the cluster state, the desired state you want to be in, along with much other cluster state, like the app configurations, the secrets I have, and so on; that also goes in the key-value store. This key-value store can be part of the cluster, or it can be outside. Here we are talking about the software called etcd, which we typically deploy with Kubernetes to provide the key-value store.
A
But
can
we
end
the
software
as
well
as
long
as
my
ap
server
communicate
to
over
http
and
do
get
put
kind
of
those
kind
of
operations
there
so
here
in
this
key
value,
store
we'll
be
having
one
leader
and
multiple
followers,
so,
of
course
I
can
have
a
single
node
as
well
of
the
key
values
tool,
but
there
is
not
for
product
environment
in
prod
environment.
A
I
would
have
multiple
replicas
of
my
key
value
store
running,
so
we
have
like
in
this
case
we
see,
there's
a
five
node
setup
of
the
key
value
store.
There
is
a
one
leader
using
which
I
would
do
all
the
writes.
The
read
can
happen
from
anywhere.
E
It's
gonna
be
very
lame
question,
sir,
when
I'm
talking
about
key
value,
stroke,
I'm
being
referring
to
my
atc.
So
can
I
have
adcd
on
the
on
on
on
a
physical
machine
rather
than.
A: Yeah, so you basically deploy the key-value store, which can be part of the nodes themselves (like there, we have a three-node cluster and my key-value store can be part of my nodes), or it can be outside. As I said, my key-value store can be inside or outside of my nodes, so I can choose to deploy it the way I would wish, and in prod environments we have seen that people choose to deploy it outside of the cluster nodes.
E
Is the answer the same for the worker nodes? Can I have my worker nodes on my physical machines rather than in a cloud, as long as they're on the same subnet? — Yeah, you can.
A
Yeah, so let's go back here and talk about a few more things. Here I'm showing a single control-plane node, but as I showed in the other diagram, I can have multiple control-plane nodes. Rather than a single node controlling the cluster, I can have high availability for my control-plane nodes as well. That would be much better for prod environments.
A
Similarly — let's say, even in a single-control-plane cluster, my master node goes down, not the worker node. What's going to happen? Even in this case, if my master goes down, my customers are not going to see downtime, because customers access my application via the nodes directly. Customers are not accessing my website via the control plane; my apps are deployed on my nodes.
A
E
One question, sir — it came up for me in an interview. If my master has gone down, the provisioning will definitely not happen, correct? Even fault tolerance will not be happening, sir.
E
But my existing virtual machines will still be connected — I already have those — but orchestration will be impacted for the meantime. — Correct, yeah. Thank you, sir.
C
F
I have a quick question. You mentioned high availability, right? So if your master node is down, we will still have replicas of that node, right? Those will take over the controls.
A
F
Which will be the case in any scenario, right? I mean, even in a production environment there won't be just one master node. So I'm assuming that until all the master nodes are down, you will still have control. The worst case scenario is where you will not have any provisioning done, because all your master nodes are down, but...
C
A
You need a quorum there, right? So, for example — depending on how you have configured your key value store. Sorry, let me just go back here. See, if your key value store is part of the nodes, then you would need at least two of them to be up to make sure your etcd is in a healthy state. Are you getting the point? Because you cannot have etcd instances running in silos.
A
At least two of them have to be in quorum so that I can write something, correct. In this case I would need a minimum of two of them to be available, while in the case where I have etcd outside my cluster, even if only one of the cluster nodes is up, I'm perfectly fine. Hope that makes sense. So it depends on how you configure it; of course, many things follow down the line as you start configuring. Okay.
E
Then the vendors — just as one understanding: when we create a GKE cluster, or an AKS or EKS cluster, we only get access to the worker nodes. We never know where the master is, correct? So how — what does the platform...
A
They basically take that responsibility, right? They would have configured highly available control-plane nodes behind the scenes, and they will also take backups of etcd at regular intervals. So even if a node goes down altogether, they have the etcd backup and can replay it, correct.
A
There are, of course, different things they do on the back end, but at the same time, because they are charging you, they take care of it — by having multiple nodes, or by taking a backup of etcd which they can replay on the fly, and so on. Because, see, my cluster state is in the key value store; if I have a backup of that, I'm perfectly fine.
E
I mean, that's the reason I ask — taking the backup of etcd is really difficult. I tried two times and it really failed for me.
A
No, it's not difficult; you just need to follow the procedure. Yeah, that's it. — Thank you, sir. — Okay, I think we'll move on now. So, continuing the discussion, let's dig deeper into each of these. Talking about the API server, to cover most of the stuff here: the API server also handles who can log in and what they can do in the cluster. That is also part of the API server.
A
The key value store stores the cluster state for me, and the API server primarily uses HTTP to talk to it, but it can be any other tool as long as the API server can communicate with it. The controller manager makes sure that at any point in time the states are matching, and there are built-in controllers in it — it's a single binary, but it has multiple controllers built into it, which are responsible for things like node management: do I have the right number of nodes or not, do I have the right number of apps running or not.
A
Then there's the scheduler, which schedules our applications based on whatever resource constraints you may have given. For example: I want to deploy my application with a minimum of 2 GB of RAM, or deploy it on a node that has GPU hardware. All those things can be configured as requirements, and my scheduler will try to fulfill them to place my applications on the respective nodes.
A
So when you deploy a Linux app, you want it on Linux machines, right? The scheduler first finds the feasible nodes — say out of 20, 10 are feasible — then it does a scoring of the individual nodes behind the scenes, and the best-scoring node is used for scheduling. This is how we schedule our application on the cluster. Any questions?
A
Okay, next let's talk a bit about the nodes. On every node we have a component called the kubelet, which talks to my API server. We have kube-proxy, which allows traffic to come inside the cluster — and it has one more job, which I'll talk about later.
A
There are many more. You deploy a container runtime on every node, and then we deploy our applications on top of it. In Kubernetes, apps get deployed in terms of Pods: the minimum deployable unit is called a Pod, and a Pod is a collection of one or more containers. As you can see here, we can have a Pod with a single container or a Pod with multiple containers. Okay, so when a Pod gets deployed, it gets deployed completely on a given node.
A
So you cannot say half of the Pod goes on node one and half on another node — a Pod gets deployed completely on a given node. Okay, let's understand this Pod terminology a bit more, because it's very important. I mentioned that a Pod is a collection of one or more containers, right? So let's talk about this Pod here: in this Pod I have container one and container two.
A
Say c1 serves on port 80 while c2 serves on port 8080. If I hit the Pod's IP on port 80, c1 responds back, and if I hit the Pod's IP on port 8080, c2 responds. Okay, so this is how it is: you have a Pod in which you can have one or more containers, and if you have multiple containers, they share the same network namespace between them.
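A minimal sketch of the Pod just described — two containers sharing one network namespace, reachable on the same Pod IP on different ports. The Pod name, container names, and images below are illustrative assumptions, not from the demo:

```yaml
# Hypothetical two-container Pod: both containers share the Pod's
# network namespace, so they answer on the same IP, different ports.
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
  - name: c1
    image: nginx                 # serves on port 80
    ports:
    - containerPort: 80
  - name: c2
    image: hashicorp/http-echo   # example app listening on 8080
    args: ["-listen=:8080", "-text=hello from c2"]
    ports:
    - containerPort: 8080
```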
A
I have a Pod, and in this Pod I can deploy a container of MySQL and a container of WordPress, right? My question to you is: would this be a good approach or not? Answer yes or no on the chat. If a Pod can have multiple containers and I deploy MySQL and WordPress together, would it be a good approach or not?
A
Yes — Pete says yes. What about others? Okay, assuming the rest of you are saying yes — anybody for no? Okay, so this is possible, but not a good approach. It is not a good approach. Why? See, for example: when I'm deploying these applications and I have packed my WordPress and MySQL together like this, now suppose I just want to scale my front end. By virtue of having them in one Pod, I cannot independently scale the front end itself.
A
I have to scale both together, which is a challenge for me, because what I needed was to scale only the front end. Because we have clubbed them together, they have become a single unit. In Kubernetes you scale in terms of Pods, which means I have unnecessarily scaled my database as well, which I didn't intend to do.
E
A very good example, but I'm a little unclear about it. You said we can proceed with that solution, but it's not a best practice. — Yes.
E
Right — and you said something after that: that we go ahead and do the scaling at the Pod level.
A
The Pod is the basic deployment unit, correct? When you work on Kubernetes, the Pod is the unit you work with. — Yes, sir. — So you have to scale on a per-Pod basis. Because here your Pod contains two containers, both have to be scaled together — you get a copy of the whole Pod. — Right. Yes.
E
Sir, got it — the horizontal scaling and vertical scaling concept.
C
E
So you can't go ahead and scale a container individually; you go ahead and scale the whole thing.
A
We can do that if they are in different Pods, right? That's why, when you deploy these microservices, we partition them in a way that each is complete by itself. Here they are two independent components, so it's better to deploy them as two different Pods. Okay.
A
E
A
Now, if this is not a good approach, then what is a good approach? What we just saw is not a good use of a multi-container Pod. So what is the good use, given that a Pod can have one or more containers? Let's say you have a use case here. Again, let me deploy a Pod. Suppose in this Pod you have an app which is, let's say, a web server called nginx, correct.
A
This is serving your HTML content, and it's getting served to your audience — perfectly fine. But now I tell you that I want you to sync up this HTML content every half an hour. How would you achieve this? One way is to modify the Dockerfile of nginx and build the image again, which would hard-code this behavior for you — sync every half an hour. But that's not going to be a very good solution, because what...
A
What if I change to a different location or a different time frame, and so on? So what I'll do instead is bring up one more container here, and that container is responsible for doing the syncing for me — it syncs every half an hour. This is what we refer to as a sidecar container: a container doing some complementary job for you. Okay — this is the primary app, and there's a sidecar container handling the complementary work.
A
Another example: your app writes logs, and those logs get shipped by the sidecar to a log server. So this is the use case for a multi-container Pod — where a container is doing a complementary job. Maybe the caching you're talking about: you want to put a Redis cache, and that can be a sidecar, correct. Those kinds of things — they are not independent apps; they are somehow complementing my main app's job.
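The nginx-plus-sync setup described above can be sketched roughly like this. Everything here — the Pod name, the sync image, and the source URL — is an assumed placeholder; the point is the shared volume between the primary container and the sidecar:

```yaml
# Hypothetical sidecar Pod: nginx serves HTML from a shared volume while
# a helper container refreshes that content every 30 minutes.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sync
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: web                  # primary app
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: content-sync         # sidecar doing the complementary job
    image: alpine/git          # assumed image; anything with git works
    command: ["sh", "-c"]
    args:
    - |
      while true; do
        git clone --depth 1 https://example.com/site.git /tmp/site || true
        cp -r /tmp/site/. /html/ 2>/dev/null || true
        rm -rf /tmp/site
        sleep 1800
      done
    volumeMounts:
    - name: html
      mountPath: /html
```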
E
That gives me an idea — I mean, except for sidecars, if we're putting everything together like in the previous example, there's no point, right? — Yes, sir.
A
Okay, so that's there. Now, this Pod's configuration file — I can give it to the kubelet in a JSON or a YAML format, and the kubelet will run it. I can provide that YAML file via an API call, via a file location, via an HTTP endpoint, or via etcd. Those are the different ways I can give this YAML file to my kubelet to deploy the application for me.
A
Okay — and of course I can have a container runtime of my own choice. We talked about the Pod, which we just covered. Kube-proxy does two jobs for me: one is getting external traffic to come inside the cluster, and there's one more job where it configures my Pod-to-Pod communication — I'll talk about that in some time. For now, kube-proxy is for external communication.
A
Then, once you have the base components installed, you deploy some additional apps on top of it — they do not come on their own: a dashboard, a logging server, metrics. Those don't come out of the box; you have to install them separately. Okay, so with that we'll move on to...
A
How do we access the Kubernetes cluster? If you have triggered the lab already, you would have got a single-node cluster, and if you do a kubectl get nodes, you will see that it has configured a single-node cluster for you, which behaves as both worker and master. This will be sufficient for doing our hands-on labs. Yeah.
A
So what's happening here: once the cluster has been configured — I'm not going into how it's configured, because that's a one-on-one thing; assume the cluster has been configured for you — when you configure the cluster, you get a file called kubeconfig. Let me show you: once the cluster has been configured, by default at the location .kube/config in your home directory, you will find the configuration file.
A
This config file says how we are going to connect to your cluster. Assuming you have configured a cluster on a cloud — AWS, Azure, or wherever — and from my laptop I want to connect to it, you would need this kubeconfig file on your laptop. By default kubectl looks at this location, but it can be any other location as well. For example, if I move it from here — say to /tmp — now my default file is not there, right?
A
So if I do kubectl get nodes, it is not going to work, because the config file is not there. kubectl is your CLI program, using which you connect to your Kubernetes cluster — or we can talk to the API directly; we'll get to how to do that as well — but this CLI program requires the kubeconfig file for you to connect to a given cluster.
A
Because I have moved the configuration file somewhere else, I cannot connect. But at the same time, if I export KUBECONFIG as an environment variable and point it to the location where the file is, it will start working again, because I am now telling it specifically where my kubeconfig file is. So either you put it in the default location, or you copy it anywhere else, as long as you have set the environment variable.
A
Makes sense? Okay. Now, in this kubeconfig file, if you look at it, there are a few different sections. The first is called clusters, then we have users, then we have contexts. In all these sections, if you look, each is kind of an array or a map — a list. I could have listed multiple clusters here: say I have a dev cluster, a QA cluster, and so on, so I can list every cluster's location.
A
C
A
Okay, hope you're all clear. Okay, the next thing we want to talk about is something called kubectl proxy. I said that when we connect to our Kubernetes cluster, we connect to the API server, correct?
A
So what I'm doing now is running a command called kubectl proxy. I'm running this command on my client side, where I have the config file. When I run this command, kubectl creates a secure proxy between my workstation and my API server — a proxy between my local environment and the cluster — which means, if I access this URL, correct...
A
This is actually giving me access to the API server — let's see what is happening.
A
Okay, sorry — my config file is not found, because I moved it, right? Yeah, that's why; I copied it back, and I just run this proxy command here again. So this is a tunnel between my workstation and my cluster, and now, if I do a curl on this, you can see...
C
A
This is giving me the entire set of API server endpoints. Because I'm going to make requests to my API server, these are the different endpoints I can connect to, correct. So I have /api, then /apis, and coming down I have /healthz and many others as well. Now let me exit one more time — I'll go to the next section here.
A
So instead of the list view you saw there, here's a tree view of what I have. Here we have /api, which we saw, and /apis; then we have metrics, health, and so on. /api is referred to as the core API group. Then we have the other API groups under /apis. Now, under these API groups we have versioning — so for the core API group we can have v1, and v1 means a stable version.
A
If I want to work on a Deployment, then it's /apis, then a subgroup called apps, then the version — /apis/apps/v1 — under which I have Deployments and so on, correct. So there are predefined endpoints to which you connect to work on a given object. That is how it works. Okay.
A
So now, if I have to create a Pod object, I would hit the core API group. When you say you want to work on a Pod object, you first come here, then here, then here, correct — that's the tree view, the complete path. So when I say apiVersion — I'm writing a YAML file here; it can be a JSON file as well — to create my Pod object...
A
To create a Pod object I need to hit my core API group, so I say apiVersion: v1 only, right — which means I am pointing there. It's like a shortcut, or whatever you call it: apiVersion: v1 means you are already at the core group. Now, if you are here, you can work on Pods, nodes, and so on. Then we have something called kind.
A
Kind says what object I want; then there's a name for my Pod. Then we have a spec: spec means you are defining the desired state of an object. You are saying: I want to have a Pod, and in this Pod I want to create this container, or these containers — again, it's an array, so I can have multiple containers here, correct. So I want to create a Pod object with this configuration here.
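Put together, the fields just walked through (apiVersion, kind, metadata with a name and label, and spec with a containers array) form a manifest along these lines — the name, label, image, and port are illustrative stand-ins for whatever is in the workshop's pod.yml:

```yaml
# A pod.yml along the lines described above (illustrative values).
apiVersion: v1          # core API group, stable version
kind: Pod               # the object we want to work on
metadata:
  name: my-pod
  labels:
    app: nginx          # a label — a tag on the object
spec:                   # the desired state
  containers:           # an array: one or more containers
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```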
C
A
So let's create a Pod, if there are no questions. Okay, fine. If you go to the pods section here, you'll find this pod.yml which I was referring to. Let me recollect one more time: we are saying go and hit the core API group, work on a Pod object, give it a name — we are also giving a label to it; I'll talk about what labels are in...
Some
time
is
like
a
tagging
to
your
objects,
then
we
are
saying
I
want
to
define
desired
sheet
of
my
object.
So
in
this
port
I
want
to
have
a
container
and
container
name
is
going
to
be
this
one.
This
is
the
image
I
want
to
use,
and
this
is
my
port
number
okay.
So
now,
if
I
run
this
particular
command
here,
you
can
type
it
of
course,
but
I'm
just
writing
the
command
by
this.
So,
okay,
it's
already
exists
because
I
think
I
triggered
this
earlier.
A
Let me just remove the Pod for now, since I was showing it to you earlier, and run the command again — and you can see the Pod is created now, correct. If I do get pods, you can see my Pod is in the Running state, and if I add one more option, -o wide, it tells me that the Pod has an IP address — this IP address.
A
As you can see, I'm getting other information here: my Pod's namespace (I'll talk about that in a few minutes), my Pod's IP, and what containers I am running in this Pod. Then there are the events: what happened as I deployed this Pod — it got assigned to a node, the image was already there (otherwise the image would have been pulled), and then my container was created.
A
As you can see, this spec field is of course bigger than what we mentioned in the file — it has picked up some default values as well. Spec is the desired state I want. If I come down, there is a field called status: that is the current state of the object. So when you look at any object, spec is your desired state and status is the current state, and the controllers make sure the two states are matching. Okay. To remove the Pod, I can delete the Pod here.
E
Sir, one question: between putting the -o wide option and describing a Pod, what is the major difference?
A
E
A
Yeah, they're not one-to-one the same thing; it's just a different view of the same thing, with additional information, but the common information is there. You could think of describe and -o yaml as giving more or less the same output — it's just a readable form so you can digest it better. See, if you look at describe, you have the events there, right?
C
A
Okay, I'll answer that: basically, if you describe the Pod, you'll get it — but give me a few minutes; I'll come back to it after this. Next: if I have a Pod with multiple containers — a Pod can have multiple containers; in the spec section I would define container one, two, and three, whatever I want. If I deploy this and do a get pods, you will see it saying Ready 0/3. Okay.
A
This Pod has 3 containers; that's why I'm getting 0/3. In the previous case I got only 1/1, because there was only one container. Also, look here: I have used the apply command instead of the create command from the last session. Create can only create the object the first time, while apply can also make changes on the fly — if I have made some changes to my application and want to reapply them on the fly, I can do that.
A
Now my Pod would be in the Running state, because all the images would have been pulled, and if I want to remove the Pod, I can again do a kubectl delete pod on the Pod name — but here I am saying delete -f with the YAML file. Okay, now remember: this particular command does not remove the YAML file itself. It just removes the object created from that YAML file. So here we are saying: from this YAML file, delete the Pod called multi-container.
E
For the multi-container Pod, we are just making the changes in the spec section — am I right, sir?
A
By default, yes. But if you want to pull from a private registry, you have to give the registry location as a prefix — not the PVC, the private registry of yours. Suppose you have a registry at some location, like register.guru; then I'll prefix that here. That's all.
B
And on the difference between create and apply — is there anything which gets recorded for playback? I mean...
A
C
A
Okay, so let's look at namespaces now, and after that we'll take a five-minute break. When you deploy a cluster — or when you have a cluster which is shared by multiple teams — you want to make sure that no one team is able to just grab the resources and consume everything on that cluster, or you simply want to enable multi-tenant environments.
A
For example, you have a dev team, a prod team, and so on, and you want different users to go to a specific team's area and work there. Kubernetes namespaces are the Kubernetes way to partition the cluster virtually. It is not hard partitioning per se, but you are creating multiple virtual clusters on the fly. Here, for example, I have a dev team and a prod team, and I can say that the dev team can consume at most 4 GB of RAM overall.
A
Similarly, the prod team can go up to 8 GB of overall RAM. So I'm limiting the users of the dev namespace from going beyond 4 GB, so that they cannot consume the entire resources of the system, and so on. Correct. Also, the objects you deploy are unique within a given namespace only. For example, my-pod in the dev namespace is a different object than my-pod in the prod namespace, correct.
A
Now, if I look at the get namespaces command here, I can see six of them — two created by us for our lab purposes, and these four defaults, which you can see. As an admin — currently we are admins with full access — we can create new namespaces as well, so I am creating a new namespace called test here. Now, when I deploy the applications...
A
...I can hard-code it — I can hard-code my pod.yml file like this. Suppose I had the pod.yml file from the earlier section, right.
A
So I can say: run this particular Pod in the test namespace — I can hard-code it like that. But rather than hard-coding, what we're doing is passing it to the kubectl create command with -n test. So I created the namespace here, which currently doesn't have any Pods running in it.
A
There are no Pods in that namespace now, correct? But then, if I do create with the pod.yml and -n test, I'm saying: run that Pod in the test namespace. Let me copy that, actually.
A
So now this deploys the Pod in the test namespace, and if I do get pods, you can see my-pod in the test namespace here. And now if you describe this Pod — kubectl describe on my-pod with -n test — you'll see it, because of course you're saying -n test here.
But
if
you
look
at
even
in
the
describe
the
quad,
you
see
here
where
this
part
got
deployed
and
all
those
will
come
here
correct
and
if
I
remove
the
namespace
automatically,
all
the
apps
of
that
place
will
also
go
away
because
I
am
removing
my
part
name
space.
All
the
apps
of
that
dashboard
will
be
removed
as
well.
So
now,
even
my
fastness
is
also
gone
here,
so
I'm
coming
back
to
the
normal
other
things.
B
What are the best practices with regard to working with namespaces? What would you suggest — some tips, or gotchas we need to be careful about?
A
B
A
C
A
A
C
A
B
A
Okay, fine — so with that...
C
A
With this information, we are now going to the next section, which is application lifecycle management, where we look at how we deploy applications in a way that they can scale up and scale down on demand. So, we just saw Pods — suppose...
A
If I wanted to deploy three copies of my application, I would have to deploy three Pods separately, right? They would not be grouped together, and if one Pod dies, no new Pod comes as a replacement. So we want to deploy these Pods in a declarative way, and that's where we have this object called a ReplicaSet, which helps me deploy these Pods declaratively.
A
We are all good — but now let's see: if one of the Pods dies, a replacement Pod automatically comes in its place; as you can see, another Pod has come up now. And this is my ReplicaSet YAML file, in which we are no longer using just v1 — now we use apps/v1, which means it's under /apis/apps/v1. In a ReplicaSet you define the replica count and your Pod template.
A
I'll talk about these labels and selectors in a few minutes, but for now: a ReplicaSet deploys X number of copies, or replicas, of the Pod template you have mentioned. Okay, so let's go ahead and deploy this ReplicaSet — the same example we have seen: kind ReplicaSet, replicas, and I want to have three instances of this given Pod. Now, if I apply this here, it is going to create a ReplicaSet.
A
The short form for it is rs, so you can say get rs as well. And now you can see it creates three Pods for me — as you can see, three Pods have come up, each named with the ReplicaSet name plus some hash value. Now, if I delete a Pod here, right...
A
F
So yeah — like, if your Pod has not died, but for some reason the application inside is not working — that's something...
A
It will create a new one. What you're referring to is, let's say, a failure of the health check, correct? Okay.
A
Maybe the Pod is running, but inside the Pod the app has died, correct? For that purpose we have a different thing: you configure a liveness probe — you'll see the health checks in the lab. Those are a different piece which we'll configure, and in that case, if the health check fails, Kubernetes will restart the container on its own to bring it back up, correct. But...
D
B
And in the YAML — basically, in the YAML we see this selector...
A
Correct. Now let me deploy one more Pod here — let me get that Pod created; it would be in my other section.
C
A
Here — we saw that for those three Pods, when we deployed with the ReplicaSet: this is my replica information, and this is my Pod template. In the Pod template I have defined a label, app equals nginx, so those three Pods have the label app=nginx. Okay. So now, let's look at the command kubectl get pods with the option --show-labels.
A
Right — no label has been given here; that's why no label shows for this Pod, correct? No label in this case. Now, there's something called a selector, using which I can select my objects. First of all, these labels can be given to any objects, not just Pods — I could label my other objects as well, and once I have labeled them, I can select on them. So now, if I do kubectl get pods with -l for selection here...
A
...it checks the Pods: if they have the label app=nginx, fine, and if they don't exist, create them — that's how, when we had no Pods created, three Pods came up. That being said, let me give you an example. What I'll do now is label my Pod — the one I have here. I am going to put a label on this Pod.
A
A ReplicaSet — and I want to have three Pods which have the label app=nginx, correct.
A
Right — and if they don't have it, create Pods of this particular category. Okay, now before I actually do it, let me make one point; I'm going to come back to this example again. I'm just going to remove this particular YAML for now, but I want to show you one thing first. Currently I have this replicaset.yml — perfect; I'm just going to apply the file here. Currently I don't have any Pod running here.
A
Correct, and now if I check, the pods are there, perfect. Now I do kubectl get pod with the pod name and -o yaml: I'm looking at the pod and saying, give me the pod in -o yaml format. Now, if you look at the output when I do this and page through it with less, you can see it has a field called ownerReferences.
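The ownerReferences field being shown lives in the pod's metadata; a trimmed sketch (values illustrative) looks roughly like this:

```yaml
# Excerpt of `kubectl get pod <name> -o yaml` (illustrative values)
metadata:
  name: nginx-abcde
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: nginx            # the ReplicaSet that created this pod
    controller: true
    blockOwnerDeletion: true
```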
A
F
I have a question; I'm trying to follow, but you mentioned something like: in the replica set, when you go from the top, the kind is ReplicaSet, the name of the replica set is nginx, and within it you're saying I want to create three replicas in that replica set, and then the label of those three pods will be nginx, right? Yeah.
F
Whatever, in this particular case, yeah. But you mentioned something before that: if the label is not nginx, then you create them.
A
So what I said just now, what I'm trying to show you: earlier I had no pods there. I deployed this replica set, which gave me three pods, correct. Now I am referring to one of those pods here, with get pod and the -o yaml format. I had three pods because of the replica set.
A
I am looking at one of the pods here in the -o yaml format, and here you see that there is a field called ownerReferences being set. Because this pod got created by the given replica set, the owner reference has been set like that. Okay, it is saying that, for this particular pod, the owner is the ReplicaSet named nginx, correct. Is this clear to everyone?
A
D
A
Correct. Now I am going to look at my pod in the -o yaml format. If you look at this particular pod, there is no such thing as an ownerReference, which means for this particular pod no owner has been set. It is an independent pod: if you kill it, nothing happens, the pod just goes away, because it is an independent pod, okay. Now, why am I talking about this particular point here?
A
There's a reason for that, for sure. I'm going to remove the replica set for now: remove the ReplicaSet nginx, which kills the first pods I had. So I just have only one pod running here, and currently it doesn't have any label either, correct; it doesn't have any labels set. Okay, now I modify my pod.yaml again, and here I basically add a label.
A
Labels: app=nginx, okay. So now I am adding a label in this particular pod. I'm going to reapply my pod here, and as I reapply, you can see a label has come onto my pod. So currently I just have a pod which has the label app=nginx, done. Now I'm going to apply my replica set again here, right? Now, my replica set...
A
If I apply my replica set now, you see that it basically creates only two new pods, not all three. Are you following? And now, if I describe my pod, you will see that the owner field has been set, because when my replica set came, it found the pod with app=nginx that had no parent, adopted it as one of its children, moved on, and just deployed two more new pods.
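The ReplicaSet being reapplied here would look roughly like this (a sketch; the workshop's actual replicaset.yaml may differ in details):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 3             # desired count
  selector:
    matchLabels:
      app: nginx          # adopts any parentless pod with this label
  template:
    metadata:
      labels:
        app: nginx        # pods created by this ReplicaSet get this label
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
```

Because the existing pod already carries app=nginx and has no owner, the ReplicaSet adopts it and only creates two more pods to reach the desired count of three.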
F
A question here; I mean, I don't know whether you'll discuss this later, but...
C
F
C
F
A
Right, of course, there's a problem, of course. So what happens is, first of all: when you are working in a given team, you will be part of a given namespace, consuming that, right? So of course we will assume that, because we are part of the same team, we may know each other in a way; but let's keep that aside. As a guideline, when you work in a team in a given namespace, you would choose the labels very carefully, right?
A
So at least you'll have one unique label and selector here. For example, in this particular case I can have not just one label; I can have multiple labels here. Similarly, I can have multiple selectors, but at least one has to match. So when I'm working with these label selectors, I make sure that I have some kind of unique key which I am giving to my application.
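One way to follow this guideline is to add a label that is unique to the app, so two teams' selectors can't accidentally overlap; the extra key below is just an example, not from the session:

```yaml
# Pod template labels: more than one key can be attached
metadata:
  labels:
    app: nginx
    team: payments        # example of an extra, more unique label
    release: stable
---
# Corresponding ReplicaSet selector: all listed keys must match
selector:
  matchLabels:
    app: nginx
    team: payments
```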
A
So
if
I
yes,
I
can.
A
Now assume that I change the replica set name here; I just give it a new name, right? Now let's see: if I reapply my replica set, do you think I'm going to have three pods or six pods? Answer in the chat, please, because I just changed my replica set name and applied it again, which is going to create a new replica set for me. My question to you is: would we be having three pods or six pods there in the environment?
A
Answer in the chat, please. Three? Okay, three. Okay, so overall we'll be having six, right. Why? Because when the new replica set tried to come up, it would see that, okay, there are three pods already, and they have the labels I'm looking for, but these already have a parent; I cannot just fetch from somebody else, right? Because these pods are part of some other replica set, I cannot take them, so it will create three new ones.
A
So even in this case the labels are the same and it's perfectly fine, correct; but as a good practice you basically want to put some kind of unique labeling between your applications so that there's no confusion. As you can see here, it looks perfectly fine to work as it is; nothing bad happened here, correct.
C
A
F
Is there no internal mechanism in Kubernetes which can indicate to the user that there are some pods which don't have an owner but have the same app labels, and that they're going to be included in the replica set, asking "do you want to continue?", or something like that?
A
No, there is nothing like that, but you can build that intelligence on your own. That being said, it's not part of the base cluster, but as a cluster manager for a given environment you can deploy some kind of logic. I said that Kubernetes is extensible, right? So if you want to build that, you can build that kind of logic yourself, but it does not come by default.
A
Nothing like that, because this rule is very clear here; as it stands, whatever matches the selector is taken as its own. But of course there are edge cases which can become a problem. And if you look at the next object, called the Deployment, this problem is also not there.
B
A
A
B
F
I mean, if I understand the question: if I have a replica.yaml and I create a replica set, and after that, if I just update the same file and then apply it again, and if I...
F
No, no, what I'm trying to say is: if I have a replicaset.yaml file and I create a replica set and give it a name, say abc, now I apply it, so it will create a replica set named abc and there will be three pods.
F
A
What's happening at a high level, right: this YAML file has no role to play. What actually happens is that this particular request, whenever you run the command, is sent over a REST API call to your API server, correct. So this file actually has no role to play at all.
A
Okay, so with that, I'm going to move to the next object, something called the Deployment object. Now, the Deployment object sits on top of replica sets. When we do our app deployments in our cluster, or wherever it is, typically we use the Deployment object; we don't use the replica set as-is to deploy applications. The Deployment object creates a replica set for us, which in turn creates pods for us.
A
I'll tell you why we need the separation between Deployment and ReplicaSet in a few minutes, but for now: the Deployment object creates a replica set, which will create pods for you. Okay, so let's understand that part for now and go ahead and deploy a Deployment object. If you look at the YAML file, it is similar to the replica set; it's just that we have changed the kind here.
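A minimal Deployment, as described, is the same shape as the ReplicaSet with only the kind changed (a sketch with illustrative names, not the exact workshop file):

```yaml
apiVersion: apps/v1
kind: Deployment           # was: kind: ReplicaSet
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
```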
A
So you apply this YAML file, but as you apply it, some default values automatically get attached to it, which we'll see. So now, if I do a deployment creation here, it creates a replica set for us, which in turn creates pods: deployment, replica set, pods. And look at the naming here: this is my deployment name; the replica set name is the deployment name plus some hash value, and the pod name is the replica set name plus some hash value, correct.
A
This is how they have been connected: deployment, replica set, and pods; and of course they'll have a parent relationship. If you look at this particular pod, its parent will be the replica set; if you look at the replica set, the deployment is its parent. So if you get the pod in -o yaml format, the owner reference will be set accordingly.
E
Sir, in deployments, are we creating three replicas at the same time?
E
When I'm looking at the output of get deployment, I can see the desired status is three. And there are three replicas here for you, yeah, three replicas.
E
Now, here the label, one minute... here: the label is app=nginx, they...
A
This is a Deployment, not a replica set, just a Deployment object, yeah. It just sits on top: a Deployment creates a replica set, which creates pods, and I'll tell you why we need the separation here in a few minutes. But for now just look at this: the Deployment creates a replica set, which will create pods for you. Okay, I'll tell you why we need a separate layer on top of the replica set in just a few minutes. Has everybody deployed the Deployment object and got the three pods running for it? Okay.
A
C
A
If I have to change it, I can change it in this YAML file and reapply, which I'll do in a few minutes, or I can do it from the command line. So now, if I run this command, it changes the deployment's desired state from three to one. As you can see, currently I have three pods, because my replica count, the desired count, has been set to three; but now I am changing my replica count, my desired state, from three to one, which means two of the pods will get killed.
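The change being made here can be done either declaratively or imperatively; declaratively it is just this field in the Deployment spec (the deployment name below is illustrative):

```yaml
# In the Deployment spec, change the desired state...
spec:
  replicas: 1   # was 3; two pods will be terminated
```

Imperatively, `kubectl scale deployment nginx-deployment --replicas=1` does the same thing.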
A
As you can see, it happened in real time, right? I changed my deployment's replicas from three to one, so the desired count has changed from three to one, which means, because I had three pods, two were killed and I am left with only one pod here. Similarly, I can scale up as well, and you can see now two pods are there.
F
A
Obviously, these are all identical application instances, first of all. So we don't communicate with the pods directly; we have a layer called a Service on top of them, which I'll talk about in the next section. The Service sits on top of all the pods, so when you hit the Service, that traffic goes to your pods. From the end user's perspective, you're hitting the Service only, and the Service does load balancing for me behind the scenes across the multiple pods.
A
So if one of the pods goes down, it would move the workload somewhere else, correct. That being said, your pods can be killed, let's say because of a power failure; you don't have control, right? If the power goes down, everything goes away then and there, correct. So first of all, when you are going to the microservices world, you should be writing applications that can fail at any point of time, and you should know how to bring them back, correct.
A
Correct, that's one case, where your node is in a power failure and everything is down. But, for example, if you kill the pod deliberately, very specifically I want to kill a pod: in that case, what happens is that no new traffic will be sent to the pod which is getting killed, and at the same time the terminating pod gets some grace period to clear out whatever cache and so on it may have.
F
A
Now you want to move your app from version v1 to v2, for whatever reason; you want to upgrade to the newer version, correct. So now, what are your options?
A
So in that case, what's going to happen is: the deployment creates one more replica set in parallel, which currently doesn't have any pods. Now the deployment points to two different replica sets. Then it's going to create some pods in the new replica set and kill some old ones, correct.
A
This is called a rolling update. Here I am not actually getting downtime, because my application is up all the time. What may happen is that some of my customers may see the old view and some of them may see the new view, but there is no downtime in this application, because I am upgrading with the help of the rolling update strategy, and eventually all of the pods move to the newest state. Makes sense?
D
A
Okay, so let's go ahead and change our image version now. I'm just going to run this command here: kubectl get replicaset with a watch flag; -w means watch, so I am watching this object, watching all the replica sets here. Now I am going to change the image version to, let's say, the stable version.
A
Okay, so I change the version of nginx from 1.9.1 to the stable version, and now I am going to redeploy the deployment. As I reapply it, you'll see that one more replica set comes up in parallel, and you will see the pod migration which I mentioned earlier. Okay, so let me apply this, and as you can see, a new replica set has come up, correct; in it we are deploying new pods, killing some old ones, and this is going to continue.
A
So, as you can see in this particular configuration I have specified, where is it... here, this one: the strategy. The default type is called RollingUpdate; of course there are some parameters I'll talk about in a minute. But as you can see here, I had only one replica set; then, as I did the apply, I got a new replica set, and now you can see that in the newer replica set I am creating some new pods. The desired count is three, so this shows desired, current, and ready state.
A
count is three for the new replica set, okay. Now, how did this number three come about? If you look here, I currently have 10 pods running overall, and if you look at the strategy here, we have two things: the type, and two parameters, which are maxSurge and maxUnavailable.
A
D
A
set, I'm getting three new pods created. At the same time, maxUnavailable, meaning 25%, basically says how many of the older version I want removed at the same time. So here again, 25% of 10 was 2.5, and here it takes the floor value, so two. So I'm changing my desired count from ten to eight for the first replica set, because I'm trying to remove two pods there, correct; ten has become eight now. And this would continue until all of my pods are deployed under the new replica set.
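The default strategy block being described, with the rounding behaviour worked out for 10 replicas (the surge value rounds up, the unavailable value rounds down):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%          # 25% of 10 = 2.5 -> rounded up   -> up to 3 extra new pods
    maxUnavailable: 25%    # 25% of 10 = 2.5 -> rounded down -> at most 2 old pods removed at once
```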
A
There is one more strategy here, called Recreate, okay. Now, Recreate basically, simply means that you are going to first remove all the pods and then create the new ones. Okay, so Recreate means I'm going to first remove all the pods from the current replica set, and then it will create the new ones. So with this strategy I'm going to have downtime. This is required when I don't have the liberty of running two different versions of the application.
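The alternative strategy, for when two versions cannot coexist, is set like this:

```yaml
strategy:
  type: Recreate   # delete all old pods first, then create the new ones (implies downtime)
```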
A
At the same time, when I can't run the situation where one version is new and one version is old, that's where I'm going to use the Recreate strategy. Now, I talked a bit about labels and selectors earlier, so let's quickly review. We can assign labels to any objects: labels can be given to pods, nodes, deployments, services...
A
Whatever object is there, we can assign labels to any one of them. So here we are giving labels like release=stable or release=beta, and then we have something like env=dev or env=qa. So I can assign labels to my objects, okay, and then I can do a selection on them, right. Here I am saying: give me all the pods where release is set to stable, right.
A
So, as you can see here, this one is a blue pod, this is a green pod, whatever color it is. What I am saying is that I am loosely coupling my applications: even if they have not come from the same deployment, or from the same manifests, whatever created them, as long as they have the label which I'm looking for, I'll select them. I'm not saying that they have to come from the same object, no; I'm just loosely coupling them as per my requirements.
A
So here I am simply saying: select all the pods where release is set to stable; I don't care where they have come from, correct. There are two kinds of selectors we have: one is equality-based, the other is set-based selection. With equality-based we can say release equals stable, release not-equals stable, and so on.
A
D
A
Okay, now for the example. First, I'm going to deploy three pods here. One pod has no labels; this is just my pod with no labels. Then we have another pod here with tier=frontend, release=prod; this is another set of labels. The other pod I have has the labels tier=frontend and release=beta.
A
So I'm deploying three pods here, and if I look with --show-labels, you can see the labels I have got, correct. Now, with an equality-based selection I would say: give me all the pods where tier equals frontend. As you can see here, tier is set to frontend on the first and third one, so those will be listed, correct. And if I say give me all the pods where tier is frontend and release is prod, I get just this one, the third one, right.
A
Then I can say: give me all the pods where tier is not set to frontend, which in this case means only this pod of mine; my pod doesn't have that label, yeah, correct. And similarly I'm saying: give me the pods where tier has a given value.
A
So you can again choose from different kinds of combinations, as per your requirements, to get the right listing of the pods, okay. Then we have the set-based selection, in which I am doing a set operation, as you would know. So, release in (frontend): release is somewhere either beta or prod, but never frontend, which means for the first one...
A
Then I'm saying: give me all the pods where tier is frontend and release not in (prod), so I'll get some more things here. Another one: tier in (frontend), and release not in (beta); release of course is beta, so I can't have anything, and that's why I'm getting nothing here, correct. And then I'm saying: give me all the pods where tier is not in (frontend) and release is not in (beta, prod), which means this is going to give you this one.
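These selections are run on the command line, e.g. `kubectl get pods -l 'tier in (frontend),release notin (beta,prod)'`; in a manifest the same set operations are expressed with matchExpressions (label values below mirror the demo):

```yaml
selector:
  matchExpressions:
  - key: tier
    operator: In
    values: [frontend]
  - key: release
    operator: NotIn
    values: [beta, prod]
```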
A
B
I have one question for you: for all these commands that you are using, do we have any link or any cheat sheet? Because those are very informative commands and we are noting them down.
A
You'll learn these with practice, but we actually published a cheat sheet earlier; I'll see if I can find it and send it across to you.
A
Anyway, you have access, I think, for around four weeks; I think you're getting access for the six steps, yes. And also, you can combine the equality-based and set-based selectors as well. So here I'm saying: give me all the pods where there is a label component=redis, AND tier is set to cache, AND environment is not dev.
A
So this is how you can define different expressions, different equality-based and set-based conditions together, and find the grouping which you are looking for.
A
Then we have something called annotations, using which we can basically attach properties to your objects. Again, annotations are key-value pairs, but they are primarily used to attach properties; we don't select based on these annotation key-value pairs. They are primarily to set some kind of properties, for example, a property saying who the stakeholders are.
A
What's the on-call person's ID, name, email ID, whatever; or you want to say, okay, how would I collect my logs, monitoring data, and so on. So you can attach any kind of properties to your objects, which somebody else can then look at and act upon. For example, the example I gave points to logging and monitoring: you want to say, okay, do you want to enable monitoring for this app or not?
A
So that can be a property, correct; my log server would collect the logs only from the applications which have this logging-enabled annotation set, correct. Because eventually you have to pay per data ingestion and so on, on the server side, so you control which apps you want to collect from, which you want to analyze, and so on.
A
So those kinds of pointers can be useful for other purposes, not just for you. Annotations are there to store the properties of our objects. Then we also have something called field selectors; they also help us select our objects in a better way.
A
So, for example, I want to find all the pods which are in the Running state, right. So if I look at the pods, kubectl get pods, nothing is there; I removed, I believe, everything. So let me run a pod on the fly now.
A
You'll see here, in this particular pod, under the status, I have something called phase, which is in the Running state. Suppose I want to select the pods based on this phase being Running. These are not the regular fields which I usually work on, but if I want to select on those fields, to query on them and so on, I can query based on them with these field selectors. So I am saying: give me all the pods where the status phase is Running.
A
So these kinds of filters help me do better querying; that's the whole idea behind this. Now, let's say I'm deploying a new namespace here, called v1.
A
I deploy a pod, also with the name nginx, in that namespace. Now I'm saying: give me all the pods from all the namespaces if they have the name set to nginx, correct. So this kind of query, which I want to make with the help of these field selectors, I can make easily, rather than doing some other trick; this command helps me do these things in a better way.
A
Okay. So next we look at the section on Services. So far we have deployed these applications with the Deployment object, and these apps were deployed in my environment perfectly; but now I want to expose these apps publicly, or within the cluster, and so on. So let's see what kind of challenges we'll face if we just use the pods' IPs directly.
A
So why do I need the Service object in the first place? Suppose I have these pods running; some of them are blue pods, some of them are green pods, and so on. Now these pods have IP addresses which are available inside the cluster, and my user can communicate with these pods, right? That's perfect. But now, what happens if a pod dies?
A
Previously all the pods had an IP address, right, and I can communicate with the pod IP; but if the pod dies, a new pod comes up, and it has a different IP address, right? So unless I go and tell my user that the replacement IP is this one, correct... Or assume that your app got scaled up or scaled down for whatever reason; you would have got new IPs for the pods, or killed some of them.
A
The IPs are gone. So, in this scenario, to keep telling my user that this is the new pod's IP and so on is a bit challenging, right; of course not impossible, but it makes things very challenging. So what we do, rather than accessing them with the pod IP...
A
What we do is group the similar kinds of apps together. As you can see here, we have groups of similar apps: a group of app blue and a group of app green. Now we refer to this group as a Service, so that, as a user, I communicate with this group only, and this group somehow load balances across my backend pods. So here, now, I am going to send a request to, of course, the service layer.
A
Now, this is the service layer here, which has a service name; I can give a name to this particular layer, and a respective virtual IP. This IP is not given to any interface per se; it's just like a firewall rule saying that if traffic is coming to this particular service blue, or to this IP address, I am going to forward it to these pods behind the scenes.
A
B
If the pod dies, then, when the request is coming, these labels will...
D
B
And I don't know how relevant this question is, but will this concept be the same for a private cluster and a public one?
A
Yeah, okay, great. So now one more thing here: when we look at the service layer, the service has this incoming port, and from the service the traffic goes to my pods.
A
They are the same if you don't specify, but you can say that the target port is different: for example, you can hit port 80 here, but when it goes to the pod, it'll be, say, 5000 there. You can do that if you need to. That is your Kubernetes Service object. Now let's look at a demo of this to understand it in a better way. There are different kinds of service types; one of them is called ClusterIP, which is the default type as well.
A
So now I'm deploying a Deployment here, which, if you already have running, you can keep; but here I'm doing a new deployment creation with a replica count of only one. So I'm deploying one pod here, okay; again, this particular pod will have the label app=nginx. Okay, now I'm deploying the Service here. In this Service I'm saying: kind Service, again in the core API group, kind Service.
A
There's a label here too, but this label is no longer used by me; I don't worry about this label, somebody else can select on it. What I am looking at here is the selector. This label is for the Service object itself, which I don't care about, so you can ignore it. What you can't ignore is the selector. So what we are saying is: this Service is going to select all the pods with the matching label in the default namespace, and this is of the service type called ClusterIP; it is available only inside the cluster, correct.
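The Service being applied here would look roughly like this (a sketch; the workshop's actual file may differ in details):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx        # label on the Service object itself; not what does the selecting
spec:
  type: ClusterIP     # the default; reachable only from inside the cluster
  selector:
    app: nginx        # traffic is forwarded to pods carrying this label
  ports:
  - port: 80          # port the Service listens on
    targetPort: 80    # port on the pod (defaults to `port` if omitted)
```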
A
B
So does that mean we can always refer to it with the name as well, inside?
A
Image equals nginx, the alpine version, so I'm just deploying a new pod here, correct. And if I do an exec... so, if you have done docker exec earlier, similarly we can do a kubectl exec to go inside a pod. Let me give you the command, if you want to try it out. I'm going to create one pod now, and in this pod we'll do an example of what I want to do inside the pod. So I'll say exec with...
A
-it for the interactive mode, then give my pod name and say a shell; I can give the shell like this, whichever works for you. So now I am inside the nginx pod, correct. Now, from this particular pod which I am in, I should be able to access my Service, which is running in my namespace, with its given name. As you can see, it's my service name.
D
A
A
Now, when you give this particular name, what's happening here is that it is getting converted into a complete DNS name, and what it is is: the service name, dot the namespace name, dot svc.cluster.local. This is the complete DNS name, or, say, the FQDN, for your service in the default namespace.
A
If I look at the DNS part of it... where did I want to continue, yeah. So if I look at the cat /etc/resolv.conf file, you can see here it basically first has service-name.namespace.svc.cluster.local in the search domains; this is going to give you service access in the default namespace, correct. But now, similarly, as I said, I deploy the app in the dev namespace, right.
A
Yeah, as we see... what did I do wrong? Sorry, it's nginx-svc, sorry, yeah, correct. As you can see here, now it is searching for the service from the dev namespace, because I said you should now go and list the dev namespace applications. Now, as you can see here, this IP and this IP...
A
No, not directly accessible; only whatever you have inside the cluster, right, the internal applications, yeah. So this is the ClusterIP side, which is available only inside the cluster. What's typically going to happen is, suppose in this case our service blue is a public-facing service, and service green is, say, a database; so service blue can be a public service, while green can be an internal service.
A
B
So the thing is: for the applications which need to be accessible from outside, are you going to create the ClusterIP, for instance...
B
...internal communication, for example between node two and node three: do you mean to say it automatically talks, or do we need to again create a service?
A
First of all, we're not talking about nodes, we talk about pods. So suppose this is a blue app, right, this blue color one: if the blue pod needs to communicate with the green application, it is not that the blue pod hits the green pod directly; it goes via the service layer only, correct. Typically we'll have a service layer on top of pods.
B
C
A
A
Like, see, the same is here only, right: this is the ClusterIP, right, which is inside the cluster.
C
A
...which is available only inside the cluster. Okay, one more point I want to mention: we talked about one role of kube-proxy, getting the external traffic inside the cluster. Kube-proxy plays one more important role here: it basically connects the services to the backend pods. See, what's happening is: whatever mapping you are doing here is all logical mapping, in a way, but you actually need a packet transfer, right?
A
So when I connect, I need to have a packet flow saying that if I'm hitting this particular IP, I want to go to these pods behind the scenes. So there's some kind of networking rule which enables that, and that is what kube-proxy also helps us do. What kube-proxy does is configure the firewall rules.
A
I assume that you know firewalls, iptables, in a Linux environment or anywhere. So kube-proxy basically configures a firewall rule saying that if you hit the cluster IP, this IP, you would go to any of these pods behind the scenes. So this network connectivity, how the packet would flow, has also been configured by kube-proxy on every node; kube-proxy is available on all the nodes.
A
B
A
D
A
Not
kind
of
try
to
just
bombay.
B
A
Then we have... so this is only for internal purposes, but now let's say we want to expose these apps publicly. So let me simply do one thing here: I want to actually set up a cluster on a distribution as well, because I want to show you some more things; meanwhile, as I go through that lab, this will also get done.
A
This is how I'm deploying the managed cluster on a Kubernetes distribution. So let me go here, choose my version type, the location, the nodes; I'll specify a very basic one and then just hit the button, so it's going to create the cluster for me. Let me go back and explain what NodePort is, and then I'll come back here and show you a few things. Okay, so what we did earlier is how we have exposed... I mean, in the previous case, the services were available only inside the cluster.
A
Now, let's say I want to make my apps publicly available, so here I am deploying a service of type NodePort. In this case I would basically create a Service and say it's of type NodePort, and what happens with type NodePort is that one port is opened on all the nodes. Think of when you have run this docker command, docker container run -d -p 8000:80. Typically, if you have run this docker command...
Right, -d -p 8000:80, with, let's say, an nginx image. What this particular command does is deploy the nginx container and map port 8000 of the host to port 80 of the container, correct? Which means 8000 is on the node; 8000 on the node is what you're going to hit. Similarly, here in this particular case, I'm basically exposing a port on all of my nodes.
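The docker analogy used here can be written out; this assumes a local Docker daemon and the stock nginx image from Docker Hub:

```shell
# publish host port 8000 to container port 80, detached
docker container run -d -p 8000:80 nginx

# hitting the *host's* port 8000 now reaches nginx inside the container
curl http://localhost:8000
```

A NodePort Service applies the same idea, but the published port is opened on every node in the cluster rather than on a single Docker host.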
A
So I'm going to expose a port on all of my nodes, and when I hit that node port... So when I say that I want to deploy my blue service of type NodePort, a port is opened. Like the 8000 we saw on a single node: think of the same thing, but that port is available on all the nodes. It's exposed on all of my nodes, and when I hit a node's IP on the node port, that traffic is going to come to my service layer and then to the pods.
A
We already have a deployment created, so we can skip that part. Now here we are creating a Service of type NodePort, and we are hard-coding the port number. This port number can either be hard-coded, in which case it should be between 30000 and 32767, that's the allowed range, or, if you skip it, a random port from that range is assigned.
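A NodePort Service like the one being applied might look as follows; the service name, labels, and the hard-coded nodePort value are assumptions for illustration (any value in 30000-32767 works, or omit `nodePort` to get a random port from that range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  selector:
    app: blue          # forwards to Pods carrying this label
  ports:
    - port: 80         # cluster-internal Service port
      targetPort: 80   # container port on the Pods
      nodePort: 31000  # assumed value; must be in 30000-32767 if hard-coded
```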
A
So currently I have given a hard-coded number here, so let me apply this. This way I'm going to create a service of type NodePort, which, by the way, also creates an internal cluster IP. And now a mapping has been made, which means that if I hit my node's IP on the node port, the traffic is going to come to my service layer on port 80, right?
A
So if I hit my node on the node port, I'll come to my service layer. As I said earlier, I don't need to hit the node on which the pod is running: if I hit my blue service's node port on this particular node, the traffic will come to the service layer, and from there the response goes back, correct?
A
This is how the thing is configured. So now what I'll do is find out my node's IP address. If I run this command, kubectl get nodes -o wide, it gives my nodes' IP addresses, and then I hit a node's IP on the port number. So if I do a curl on the node's IP, 10.0.1.32, and the node port, you'll see that I'm going to get the page here, correct?
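The access pattern from the demo, written out; the node IP 10.0.1.32 is the one read out in the session, while the node port value is assumed for illustration:

```shell
# find each node's internal/external IP addresses
kubectl get nodes -o wide

# hit any node's IP on the node port; kube-proxy forwards the request
# through the Service to one of the backing Pods
curl http://10.0.1.32:31000   # node port value assumed
```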
A
Yeah, so I just copied the kubeconfig file here to the other terminal. Now I'm going to go to, for example, /tmp/do-config: I'm just copying the config file of my DigitalOcean cluster, pasting it there, and setting the environment variable, export KUBECONFIG=/tmp/do-config. And if I do kubectl get nodes now, it should give me three nodes, if the cluster is up and running.
A
So, kubectl get nodes now; the nodes are up. kubectl get pods; everything should be up and running as well, yeah. And if I look at my service, it's also up. So we have got our app deployed, and it's available on the nodes on the node port. Now, how would I access it? I'm going to find out my nodes' IPs, so I'll do kubectl get nodes -o wide, and now you can see that I have the cluster here.
B
The DigitalOcean cluster.
A
I cannot do it, because DNS does not allow me to map an IP-and-port combination; I can only map an IP, correct? Which means that with type NodePort alone I cannot attach a DNS name, unless I create another layer on top of that, which can be, let's say, a proxy or something like that; in that scenario I can, but not in the regular setup. That's kind of a drawback of this approach. Got the point?
A
If I look at what's next: so this is the service type NodePort; then we have the service type LoadBalancer.
A
So in this case I can basically tell my underlying infra to create a load balancer for me. Currently what's happening is we saw type NodePort, where I need to access the app by the combination of the node's IP and the port number.
A
Now what we are doing is saying: expose my applications on a load balancer. This is possible only on infra where my Kubernetes can communicate with the underlying infrastructure and request a load balancer, correct?
A
So now, if I am running an app in a Kubernetes cluster on infra through which Kubernetes can request a load balancer, then my infra is going to give me a load balancer IP or a DNS name, which I can access publicly on the internet. So, similar to before, the traffic flows from the load balancer to the node port, to the service, to the pods. Okay.
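The LoadBalancer variant is the same manifest with the type changed; on a managed cloud such as DigitalOcean, the cloud controller then provisions an external load balancer and reports its address in the Service status. Names here are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: LoadBalancer   # asks the cloud provider for an external LB
  selector:
    app: blue
  ports:
    - port: 80
      targetPort: 80
```

Once provisioning finishes, `kubectl get svc blue` shows the public address in the EXTERNAL-IP column.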
A
The traffic would come to any of the nodes, on the node port, to the service, to the pods, correct. So it's just finishing up; okay, let it come. Yes, it's been created. Now I got an IP address, and if I hit the IP address, you'll see that I have my page up, right? Pretty straightforward: I can just deploy any of my apps on Kubernetes, make a load balancer, and it's available.
B
One problem would be that we are stuck with a particular hosting provider: you will have to use one single provider, GCP or whichever cloud you're using.
A
What's happening here is that currently I'm creating one load balancer per service, correct? Which means, if I have 10 services, I'll deploy 10 load balancers, which may be expensive in some cases, because on the cloud you have to pay for every load balancer you create, if you're getting an IP address from them.
A
Yeah, yeah, so there we have only one load balancer, and that can map to different applications and so on.
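Sharing one load balancer across many applications is what an Ingress provides: a single entry point with host and path rules fanning out to many Services. A minimal sketch; the hostnames and service names are assumptions, and an ingress controller must be installed in the cluster for the rules to take effect:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
spec:
  rules:
    - host: blue.example.com        # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blue          # assumed Service name
                port:
                  number: 80
    - host: green.example.com       # second app behind the same LB
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: green
                port:
                  number: 80
```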
A
I think the workshop we are doing on Istio, in, I think, the third week, I don't have the schedule with me, but there the team would be covering that, what it is, and so on.
A
Okay, so that is what I wanted to cover in this training session. Any comments, any feedback you have?
A
Yeah, you can also give course feedback here; there's a course feedback form, and if you just fill it up, that would be really helpful.
A
I just want to highlight one more thing: on our website we are also publishing different hands-on labs. So, beyond what we saw: as we deliver these training sessions, some of the content is primarily focused on finishing some kind of table of contents and so on, but what you deploy, what you need in production, would be more than what is taught in classroom trainings.
A
That's why we're building these different hands-on labs, so each is complete on its own. For example, one talks a bit about HTTP on Kubernetes, so we touch upon that in detail. You can go and practice these hands-on labs on your own. Also, we are looking for contributors, so if you would like to contribute, help us build more hands-on labs, suggest topics, and so on.
A
You can also suggest that, and we welcome it if you want to author some of this content, which would then be available to other people as well. We already have a few authors on board who are building content with us, and you can also do it; it will give you more visibility and create more impact in the community.
F
After this 101, do you have any, or can you suggest, what the next steps are, how to proceed if somebody wants to get into MLOps? I mean, are you coming up with courses where you have machine learning and something on Kubernetes, something like that? Because this is very basic, but then what's next for someone who wants to learn more, specifically in MLOps?
A
So, I don't have a ready answer for that, but for MLOps it's something like this: first of all, understand more Kubernetes topics, which we have not covered here because of the time constraint and so on. You could look at one of the full courses, maybe one which prepares you for CKA or CKAD, which gives you an overall picture of it.
A
Of course we offer these courses, and they are available at a price; you can buy them. We also do some offline sessions, but we primarily teach corporates and train their employees rather than doing open batches. If there's demand, we'll do it; but otherwise, you have seen the platform, you have seen how we deliver, and the remaining content in other courses is delivered like that.
A
They are there, if you want to take them, or there are other sites available, Udemy and many others, so you can pick up whatever works for you; that's something I'll leave up to you. But specifically for MLOps, I don't have a specific course there. Once you have understood the basic concepts, MLOps will just be the icing on top of that; once you understand those, MLOps would also be pretty straightforward.
A
Okay, sounds good. And if you guys want to get in touch, I have shared my LinkedIn profile here; if you just want to be in touch, I'll be happy to connect. Thanks.
B
Thanks a lot, it was wonderful. And about the last part, where, for example, someone asked the question, I guess, about MLOps...
A
No, I don't have a specific input on that. I just want to say that first we need to get aware of the overall Kubernetes stuff in general. Not to push the course, but I think CKA or CKAD is going to give you the basic grounding on what you need to know, and on top of that, MLOps.
A
I don't know specifically what you should be doing on top of that, to be honest, but I don't see any difficulty there: once you have understood those concepts, MLOps would be just a bit on top of that, nothing to worry about.
A
The foundation would be there, so you can of course take it from there; yeah, the courses are there for you.
B
Definitely, definitely, I'll look forward to it, and also I'd like to contribute, to author some of the course content on the cloud platform. Okay, I'll reach out to you. Yes.
A
They need not be complex; they can be simple hands-on labs.