Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
We appreciate you joining our live Q&A today about debugging Kubernetes — or what to do when it goes wrong, because it will. I'm Tim, on the pre-sales team here at Appvia, and I'm joined by Graham and Salman. I'll let you introduce yourselves — Graham, if you want to go first, you're first on my video over here.
B
Hey everyone — yeah, Graham Coleman, I'm a pre-sales engineer for Appvia. Long background in IT, Kubernetes, and distributed computing. I spent many years at Red Hat working with OpenShift, and prior to that I worked in the integration and Java space. I'll hand over to Salman.
C
Excellent, thank you very much. Welcome, everybody — it's good to see some familiar faces. Yeah, welcome again. So, my name is Salman Iqbal, I'm a solutions engineer at Appvia. I think —
C
My title may have changed recently, I'm not sure, but I'm a solutions engineer at Appvia, and I have been working with Kubernetes for the last three and a half or four years. I come from a developer background — I've been a developer, but I've been doing Kubernetes work for the last few years. I mainly focus on machine learning inside Kubernetes, trying to run Kubernetes workloads at scale. So that's me — that, and teaching people Kubernetes.
A
Yeah, and I'll just let you know this is informal — it's a live Q&A, so feel free to ask any questions. Come off mute; you'll notice that everybody's allowed to talk. If you have something while we're doing this, you can throw it in the chat, throw it in the Q&A, or unmute — feel free, we'll make this a discussion. So with that, I'm just going to go ahead and hand it over so we can get started.
C
Yeah — there are a few things we thought we'd discuss, but the main thing we're going to focus on is what actually happens when you deploy your application in Kubernetes. When you deploy an application to Kubernetes, what does it look like, and more importantly, what are the things you need to watch out for? So we're going to deploy a simple app initially and see how that goes.
C
We'll talk about some debugging techniques while we deploy it, and then Graham and Tim are also going to talk about some other issues you might come across — errors they might have seen, like CrashLoopBackOff, whatever those errors might be.
C
Every time you try to deploy something, there are so many moving parts — it might seem like a lot, but hopefully we'll do a demo and talk through it. So I'm going to share my screen for a few minutes and we'll go from there. If you're expecting something else, please let us know and we'll talk about that too — whatever we need to discuss, we can talk about.
C
So, real quick: we have a Kubernetes cluster with a control plane and two worker nodes. When we submit any of your applications that need to be deployed —
C
We submit them to the Kubernetes control plane; the control plane receives the request and tries to deploy the workloads onto your worker nodes. You could have as many workloads as you like inside the cluster, but in this case I'm showing those two. I have a local cluster running — you could use anything you like — but that cluster itself just has a single node.
C
So that's what that is. And what actually happens when we deploy? Here's the thing: what does the structure of a Kubernetes application actually look like? You create what's known as a pod, as you might be aware, and inside a pod you define what kind of image you'd like to run — we'll show an example in a few minutes — and you can deploy that pod. But usually you don't deploy a pod directly.
C
We deploy what's known as a Deployment, because a Deployment has some information about what we're about to deploy — the name of the image and how many replicas we would like to have — and then the Deployment looks after that pod. If it crashes, it will bring it back up to the desired state.
C
That's fine — it will run itself. But what if you have multiple replicas? How do we decide where to send the traffic? How does one application running inside a pod communicate with another application? This is where we bring in what's called a Service. You can think of a Service as an internal load balancer: it decides where to route the traffic. But that's all internal — what if you want to access the website from outside the cluster?
C
That's where we create an Ingress, and this is what we're going to do: I'm going to create a Deployment with one replica initially — we can change the replicas if we need — and we'll talk about how, once you've done a deployment, you know what you've deployed is correct, and how you debug the issues. Does that sound good to everybody? Let us know in the chat. Anything from you, Tim?
C
I'll send it over to you later on. So I have this cluster — it's just one node — and I haven't really deployed anything inside it. I can do kubectl get nodes, kubectl get pods, or get deployments — nothing is deployed in the default namespace. So we're going to go ahead and create a deployment now; I'm going to bring up the code.
C
We are going to run a container — but which container? Well, let's just try running the container itself first. Stefan has created this container — shout out to Stefan — called podinfo. Without Kubernetes, imagine this is just a static page; it serves a page, which we'll show in a second. I can do docker run and try to access the container first to make sure it's all correct.
C
If there's something wrong, we'll figure it out — we can have a look at the container itself. But here's what we can do: I can use the -p flag to connect my machine to what's running inside the container, because the container is isolated. So: docker run with the -p flag — and when the container runs, the website runs on port 9898, so we'll map that to 9898.
C
So here you go — and let's open this deployment file. This is what a deployment file looks like, and there's some information in here; ignore this for a second, we'll come back to it. At the top — I think it's big enough for you all to see — we've got some information about what kind of resource we're creating: we're going to create a Deployment. If I come down to the bottom, this is perhaps the most useful bit.
C
What kind of image we'd like to run — this could be any image you're trying to run; it's going to pull this image from a container registry. Then there are a couple of things in here: we've got the name, we have imagePullPolicy, and, more importantly, what port the website runs on, because it is a website. And then what we've got in here is a bunch of labels. We'll explain labels in a few seconds — why we need them — because this is where things usually go wrong: around labels.
C
What we've got in here is some metadata. So what we're going to do is create this deployment and then check what happens when we do. If I wanted to deploy multiple replicas, I could just come in here and change the replicas to four or whatever, but we'll just deploy one replica. So that's the deployment file.
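As a rough sketch of what's on screen — the image name, port, and labels below follow the podinfo demo, but treat them as illustrative rather than the exact file being shown — a minimal Deployment manifest looks something like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 1                 # defaults to 1 if omitted
  selector:
    matchLabels:
      app: podinfo            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo   # pulled from a container registry
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9898       # the port the website listens on
```

The selector/labels pairing and the containerPort are the two spots the discussion below keeps coming back to.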
C
If I do kubectl apply -f deployment.yaml, it comes through and says the deployment has been created. I can check that to make sure it's all correct — it says yes, there's one pod and it's up and running. That's a good sign that stuff is running, and I can check the pod itself.
C
Excellent question. In a deployment we can define replicas — in the deployment here we can say how many replicas we're running. If we don't define the number of replicas, it defaults to one. So all this is saying is that you asked for one and you have one pod, which is ready. That's the desired-state configuration of Kubernetes: it brings you toward the desired state. If I changed this to two and only one was up, it would say one out of two ready. Does that answer your question?
C
Oh excellent, very good — this is really good: how does Kubernetes know your container or your pod is ready? There are probes in Kubernetes that you can use: there's a readiness probe and there's a liveness probe — there's also a startup probe — but it's the readiness probe that you can configure for this.
C
For example, when your application starts up, you can check whether the process is up and running — that's where you use a liveness probe. But your application could be up and running and still not ready to serve traffic; perhaps it needs to load some data from a database and put it in a cache. So you can configure this readiness probe to check against your application and say when it's ready. Now, you didn't see me configure that at all.
C
If I don't configure a readiness probe, Kubernetes will assume the pod is up and ready. We're lucky in this case, because it's just a static page — it's ready to serve traffic — but this is something you can look at to check whether your application is ready or not, by configuring these probes.
B
Because it's one of the things when you're debugging, right — it's never ready, your container is not in a ready state. Take a look at the readiness probes: has it had any configured? It might be that something's configured that's pointing to something that hasn't loaded, or it's a static web page the probe is trying to reach that isn't loading. So it's the type of thing you can look at first: why isn't it ready?
C
Yeah, that's excellent — that's a really good example. Liveness and readiness are similar checks, but I'm going to use this liveness example. As Graham was saying, you define your probe — this is a liveness probe; you can define the readiness probe the same way — and exactly as Graham said, you have to configure a path in your application. You have to give it an endpoint, and in that endpoint you can write any logic you like.
C
You write whatever logic you like; usually, in this case, you're going to return an HTTP 200. If you get a 200, that means it's all good — if the process is up and running, it returns 200, and Kubernetes knows it's live. But you can write any logic you like, and as Graham says, check —
C
— that this is correct: that the path is correct, that the port is correct. Also, sometimes your application might take a little while to start up, a little while to get ready, so you can add some delay at the beginning so the probe doesn't start checking until the application is actually ready to be checked.
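As a sketch of what that configuration can look like — the paths and timing values here are illustrative choices, not taken from the demo file:

```yaml
# Inside the container spec of the Deployment's pod template
livenessProbe:
  httpGet:
    path: /healthz          # endpoint your app exposes; any logic can sit behind it
    port: 9898
  initialDelaySeconds: 20   # give the app time to start before checking
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz           # only report ready once data/caches are loaded
    port: 9898
  initialDelaySeconds: 5
  periodSeconds: 5
```

A 200 response marks the check as passing; a failing liveness probe restarts the container, while a failing readiness probe just takes the pod out of the Service's rotation.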
A
Good — could that be for something like, you know, you have dependencies, like a database, and you're waiting for that? If you're deploying all of this at once, you're waiting on that to come up — give it 20 or 30 seconds to finish doing what it needs to do, and then spin up and go from there.
C
Yeah, I guess you can use it for that case. One of the things people say not to do is make these services reliant on other things, because if they fail, you end up with a cyclic dependency: this isn't ready, so that's not going to be ready, so something else isn't going to be ready. So it's something you have to watch out for — but yeah, that's a good example as well.
A
No — well, and I'll say real quick, I apologize: I realized the chat was disabled for some reason. Thank you for flagging that; I went ahead and updated it, so now everybody should be able to use it. I was wondering why it was so quiet.
C
So, back to our pod — that was a very good discussion. If I do kubectl get pods, I have a pod which is running. I can also check the logs of the pod to see if your application is logging anything to standard out or standard error — you'll see something that looks like this, just showing you the log. That's another way of checking everything's good, and this relates to what Graham was talking about with the status.
C
It's running — that's the liveness probe telling us — and then we've got this ready state: the pod is actually ready. Now, this tells me it's ready, but I need to be able to check that what I've deployed is actually correct. As you remember from the diagram, I can't really see this thing from outside the cluster unless I deploy a Service and an Ingress — and that's what we're going to do now: deploy a Service, then an Ingress.
C
But what about — like with the container you just saw, where I could do port forwarding and check it — can we do something like that with a pod? We can: with a pod we can port-forward using kubectl, so we can at least test whether the pod is running correctly. I mean, all the things we've talked about so far look okay — we've got the pod running, I can see some logs, there are no errors — but we can check this. So I can do it like this.
C
I can do kubectl with something that looks like this. This pod is running on port 9898, so I can do kubectl port-forward, then what kind of resource I'm trying to port-forward — so I'll say pod — then the name of that resource, which is podinfo, and then I'll pick a random port on my machine — let's pick 8083 — and then the port this container is running on.
C
I know that because when the container is built, it logs that it's actually listening on port 9898. So if I run this command, open a browser, and go to localhost:8083, I should see that page like you saw before — earlier it was 8082; we can check against the deployment in here. So let's do this: localhost —
C
Oh, 100%, yeah. So this is basically it: we've got the website running inside the pod. That's what we've confirmed so far — we haven't gone all the way yet, we're just building up — but that confirms the pod is running okay. Now what we're going to do is deploy our Service, because we need to map these things up. And how do we map them?
C
Basically, this is what we're looking at: we have a pod and we have a Service. The pod is listening on a container port, and the Service has what's known as a target port. We need to make sure these two match in our YAML configuration files. So let's go back to our YAML file. This says the target port is 9898, which is correct, because that's what the container port was: 9898.
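Sketching that wiring — the service port of 3000 is the one used later in the demo; the names here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  selector:
    app: podinfo        # must match the labels on the pods
  ports:
    - port: 3000        # the port the Service itself listens on
      targetPort: 9898  # must match the pod's containerPort
```

If targetPort and containerPort drift apart, the Service exists but quietly routes traffic to a port nothing is listening on — which is exactly the class of misconfiguration being described here.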
A
And one thing, if I can real quick while you're doing that — we had a question come in, and I'm a little late to it: Michael had asked what role describe pod plays in your health checks.
C
Very good — let's do that. kubectl get pods... very good, actually. One of the things you can do with this command-line tool is describe all kinds of resources: kubectl describe pod and then the name of the resource — you can even write it with a slash. So let's describe this pod and see what kind of information we get. If I scroll up a little bit, it provides more information, more detail, than you get from just kubectl get pods.
C
You get information in here like the name, which namespace it's running in, the container ID and so on, and it also gives us some information about whether it's ready or not. But if you go to the bottom, sometimes you get information about the events that have happened — it started, it pulled the image. The kubelet as a component — I think that's a topic I'll probably discuss next time around.
C
I can't remember exactly what kind of information comes in here, but — Graham, you might remember some?
B
It's just different events. Kubernetes has an event collection system, so events get posted by all of the components inside Kubernetes and all the resources. So you can get at them —
B
You can grab all the events from every namespace across the whole cluster, or narrow it all the way down to the events that this one pod you've looked at has sent out. There are some standard events that come out of just the pod spec, which you'll see there — the kubelet's started-container and created-container events and things like that — and you'll also get something coming out of —
B
— coming out of, say, a pull: it's pulled the image but it can't start it successfully because it's errored. So you'll see things like that, just as events being pushed out by the pod.
C
Which — maybe it doesn't exist? Maybe you misspelled the image name in here — instead of podinfo you wrote something else — maybe I don't have the right version, maybe I don't have access to the repository it's trying to pull from. This one is coming from Docker Hub, which is open, so that's fine. The way it works in Kubernetes, it will try to pull it, and then you'll get the ImagePullBackOff.
B
So that's an error that could happen. If you've not seen it before: that image is just the Docker container image that's in a repository somewhere, and that's just the address to it. It's defaulting to Docker Hub and trying to find that address for that Docker image.
B
It's called back-off because it'll try to pull the image, back off for a default time period, and then go and try to pull it again — just in case there was a network issue or a communication issue or something. So it'll keep on trying, and you can configure that once you get into the depths of how the container orchestration system works. Which is why it's an ImagePullBackOff: it's not "I can't", it's "I can't yet" —
C
"I'm going to try again" — okay, yeah, excellent. So far what we've got is: we've deployed a pod, we have a Service that we're going to create in a few seconds, and we just need to make sure these two things match. There's another thing we need to make sure matches, which is — if I hop back in here and you look at the Service — a Service could point to multiple deployments, or to a bunch of pods that belong to a deployment.
C
How does it know which pod to send the request to — or which container, specifically? As you see, we don't really specify the name of the deployment anywhere; all we do is specify the name of the service, which I called podinfo. The way it picks the pods it needs to send traffic to is using what's known as a selector, and with a selector you select a label. Labels are defined in here, under this bit: selector, matchLabels, app: podinfo.
C
And whatever you define here, you have to define there as well. Now, usually I just copy and paste a YAML file and change the things I need to change — that's what we all do — but that's how it picks the pods. Now, this label is a deployment label, and that doesn't have to be the same; it's these two that have to be the same, but I just kept them all the same for simplicity. That's basically the two things we need to match.
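Putting the two sides next to each other — the label key and value here are the demo's, but the pairing is the point:

```yaml
# In the Deployment: labels stamped onto every pod it creates
template:
  metadata:
    labels:
      app: podinfo
---
# In the Service: the selector that picks those pods
spec:
  selector:
    app: podinfo   # key and value must match the pod labels exactly
```

A mismatch here produces a Service with zero endpoints — the pods run fine, but no traffic ever reaches them, which is why labels come up so often when things "usually go wrong".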
B
Just to step back from that, Salman — for the guys and girls on the call: a pod will just run across any of the worker nodes within your cluster. It doesn't matter where it is; it'll run somewhere.
B
Now, you don't know where it's going to run — the Kubernetes scheduler will run your pod somewhere, on any one of the nodes. So the Service is a way of decoupling from wherever that pod runs on the worker nodes: Kubernetes understands where it's put that pod, and the Service is a way of addressing it.
B
So you address the Service, and the Service knows where the pods are running in the cluster. You need that Service because if a pod disappears and moves to another node, Kubernetes knows about it, so the Service will change its load balancing to point to where it knows the pod has been moved to. All you need to know is the Service address internally. It's a way of decoupling from a pod disappearing and moving to another node.
C
Perfect, exactly — yeah. So we've got these labels here, which is what we're trying to select on. Now we'll create this Service — so let's do that.
C
podinfo — that's the one I created; the kubernetes service is there for Kubernetes' own stuff, and that's already running. So that's now created. We told it to run on a specific port — it could be any port, you can pick anything you like — so that's what it's running on. Now, how do I check if it's all running?
C
Well, you remember I did port-forward before — I can do that again, but this time for a Service: podinfo, and I'm going to pick another port, 8085 this time, and this time the port of the service itself is 3000. If I run this, it does something similar: if we go to localhost:8085 and still see the page, that means we've wired everything correctly. And just to prove that not every port serves this, you can see 8086 doesn't have anything on it.
C
Again, that's important — yes. So basically, we're just making sure everything is correct. So far what we've talked about is what can go wrong — the bits to watch out for: labels, as Graham was saying, are very important, and the port — the target port — make sure that's correct. And you have different types of Services; we're not going to go into that today, but you can expose them externally in different ways, and how they work internally — that's another —
C
— that's a matter for another time. But the Service is just like an internal load balancer, as Graham was saying: if a pod goes missing, or we have multiple replicas, which one do we pick? Let the Service decide — it's an abstraction. Now, the thing is, I need to be able to access this from outside the cluster. Port-forwarding is all well and good, but imagine it's a website that people need to access.
C
You can't give everybody kubectl and say "hey, just port-forward it" — that's not right, because you'd have to set everybody up with that. This is where we use an Ingress. You can think of an Ingress as an external load balancer: we send a request to it, and then in here we write some rules using this Ingress YAML file. You can give the file itself any name you like, but this bit here is important: the kind.
C
Now, you have annotations, which add additional features for the Ingress, but the more important thing is the rules in here. I wrote this HTTP rule that says: for any request that goes to the root, send it to the service called podinfo, which is running on port 3000.
C
So what we're doing is trying to match this up: basically, the Service is running on a port, and then we have an Ingress with service.port, and we need to match that — and the name of the service too, of course; you saw the service name in the YAML file. Again, things to watch out for: if it's not working, check the port and the service in the Ingress —
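A sketch of that rule — host and ingress class are omitted for brevity, and the annotations block (not shown) is where those extra Ingress features would go:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo   # must match the Service's metadata.name
                port:
                  number: 3000  # must match the Service's port
          # each additional "-" entry adds another path,
          # e.g. /login routed to a different service
```

The service name and port number here are the two values that have to line up with the Service manifest — exactly the checks called out above.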
C
— the port number, the service name; see if it's all correct. And if you have multiple rules, you can write multiple rules in here: you can say if somebody goes to /login, send them to a different service, and so on. Every time you see a dash, that means it's a list, so you can keep adding entries. So, if I apply -f the ingress.yaml — now, the way Ingress works is it actually spins something up in the cluster itself.
C
It creates another component that looks after all the requests that come in. So you can't really port-forward an Ingress, but you can port-forward the Ingress controller — we're not going to do that, but if you needed to, you could check it that way. The way you can access stuff in minikube is to get the minikube IP — that's a local IP which is exposed — and I can access the website through it.
C
So if I type this URL and we see this page, that means we've configured all of this correctly, and everybody's going to clap — we'll have done a deployment from top to bottom: Ingress, Service, pod, everything correct. And if it doesn't work, we can try to figure out why it didn't. Oh, it worked — podinfo itself is clapping, which is good. So what we've done is gone through the whole thing, and you've been able to ping this, or whatever.
B
Come off mute if you've got a question about anything — but specifically, have you got any questions for us? Nothing personal, just mainly Kubernetes and what we've just seen.
D
Hello — hey, yeah, thanks for the opportunity. My name is Ibrahim.
D
I put it in the chat there — the first time I actually interacted with Kubernetes was on Google Cloud, so that's actually a good one. However, I came across minikube and kind as part of playing around with VS Code — I saw minikube and kind in VS Code.
D
So, if it's possible — I'm trying to respond to Graham's comment here — I wish you could expand on it: which is preferred, minikube or kind, and what is actually the difference?
B
Sure. So they're both locally running Kubernetes instances. If you're using GCP or IKS or EKS, that's Kubernetes running in the cloud for you. So your options, if you want to run something locally on your laptop to do some really quick dev testing —
B
— then you've got a few options. You can actually just install a Kubernetes cluster from source: if you're running Linux on your laptop, fine, you can run it there, or you can create a VM with Linux on it and run it in there. But don't do that, because that's horrible and you're getting into a world of pain. So the other options are: minikube —
B
Minikube is exactly the same, but it's been packaged up into a VM with a kind of control layer around it, where you can just start up minikube and it will create a Kubernetes cluster inside a VM — I think it uses Vagrant — and give you access to a Kubernetes cluster. Minikube is quite good in that it gives you additional extras: for the Ingress, for example, that Salman was looking at —
B
— you can deploy an Ingress controller inside minikube using the minikube commands, and there's a ton of stuff that minikube will wrap up and give you. But that'll just run on your laptop, so you can use it and deploy things locally. Kind is similar: kind will run Kubernetes — it stands for Kubernetes in Docker. Kubernetes in... what's the N? I think it's the K — yeah, okay.
B
Indeed, there you go: Kubernetes in Docker. So all you need is a Docker runtime — whether you use Docker or Podman or whatever — and you can run kind inside that runtime, and it will spin up a Kubernetes instance inside a Docker container on your machine. So it's just a great way of getting Kubernetes locally.
B
It's pretty quick to spin up and spin down. The drawback is that it will consume a lot of the resources on your laptop, so if you try to deploy any reasonably sized application inside Kubernetes in one of these environments, it'll run like a dog — you've just got to be careful with what you run inside it. It's great for exploring, great for poking around, because you can't damage anything — but it's got its limitations. Does that make sense?
D
Yeah, it does, and that takes me to my second question, if you don't mind. In a number of environments I've seen them use hybrid setups: you have this running locally, and sometimes you just want to make it work with the cloud infrastructure. Does this come in handy when you're trying to do that, and how easy is it to actually make it work in a hybrid setting?
B
So, when you work hybrid — there is no direct link between what you do on your local machine and moving that over to another Kubernetes environment. The way people do this is: first of all, you've got your Docker images — the actual things you're running — and those will be in a registry somewhere, wherever you've shared that Docker image.
B
If you're moving between your local environment and your cloud environment, they both need to be able to access that container image. So wherever you're building application code and pushing it as a Docker image, that has to be shared between both. Now, getting things deployed into that Kubernetes environment is all about the manifests that Salman was just going through — the Deployment, the pod spec, etc.
B
So if I've created a deployment manifest on my local machine and tested it, and it pulls from a container image that I know is shared with my cloud environment, then that's portable, right? You can move that directly from what you've deployed on your local machine to running the same file on your cloud environment.
B
So that's the artifact — the shareable artifact — that might go into your CI/CD process, or you might use tools such as Helm or other things that will build and manage your Kubernetes deployment. That's where you move between the environments: you can use the same manifest, and it'll pull the same container image and run the same things. So that's the shareable part of it.
B
Yeah — so if you're just starting out, probably avoid Helm for your own thing, and use Helm to go and grab, you know — I want to deploy a MySQL database: use a Helm chart to deploy your MySQL database. But for your own app, if you're starting out and building it from scratch, just create the manifests — the Deployment and the Service and the Ingress — create all that separately.
B
I
mean
once
you've
learned
what
the
you
know,
what
a
deployment
looks
like
and
what
the
types
of
data,
what
types
of
information
you
need
to
put
in
there,
the
metadata
you
need,
and
once
you
learn
the
Ingress
and
the
service
and
and
just
getting
used
to
those
artifacts
hey.
Then
it's
easier
to
go
to
helm,
because
you
kind
of
understand
what
you're
you
know
a
bunch
of
it's
just
boilerplate
and
that's
what
Helm's
there
for,
because
you
can
boil
a
plate
most
of
it
and
there's
only
a
few
things
change.
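As a rough illustration of that boilerplate point (this is a hypothetical chart fragment, not something shown in the session), a Helm chart pulls the few values that change out into `values.yaml` and templates the rest once:

```yaml
# values.yaml -- the few things that actually change per environment
image:
  repository: registry.example.com/my-app   # placeholder registry/name
  tag: "1.0.0"
replicaCount: 2

# templates/deployment.yaml -- the boilerplate, written once; the
# template references substitute in the values above, e.g.:
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Overriding a value at install time then looks like `helm install my-app ./chart --set image.tag=1.0.1`.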
C
Ibrahim, thanks for the questions. I've posted the link for the webinar we did last week; it was around Helm: why we need it, how to get started, and how to use it. It's on YouTube, so you can check it out whenever you have time.
A
But yeah, also something to know: if you have ideas for topics you'd like to see — we already had a couple submitted earlier — feel free to put them in the chat. We are absolutely open to doing what you want and giving you what you're asking for; otherwise we're just going to come up with something each week.
E
All right, thank you very much, and thank you everybody. I just wanted to ask Graham or Salman: for some countries you basically wouldn't be able to deploy in the cloud, because of GDPR-style data regulations in some countries in Africa.
B
Yeah, so I think it's more than Ingress; it's the whole platform that you need to take care of. It mostly depends on where your data sits. Wherever I'm processing data, if I'm in a region where I'm not allowed to go to any of the public clouds, then you've got to be looking at what you can do on-prem.
B
What
can
I
do
in
a
self-managed
or
finding
a
hosting
provider
in
your
region
that
keeps
the
data
in
a
in
somewhere
that
within
the
regulations
of
that
environment
that
country,
so
it's
it's
not
just
Ingress
I-
think
you'll
need
to
deploy
a
kubernetes
on-prem
solution
so
and
there's
a
few
out
there.
So
you
know
you
can
look
at
things
like
openshift
and
tanzu
or
or
just
do
it
yourself,
kubernetes.
You
can
download
and
install
and
run
and
manage
kubernetes
yourself,
although
why
anyone
would
want
to
do
that,
I
don't
know.
B
You know, actually, now there's no real reason to build your own Kubernetes, because people like Red Hat with OpenShift, VMware with Tanzu, and a bunch of others add a whole heap of value and stop you from creating a mess of your own DIY Kubernetes. But it's complex, right? You've got to understand where your data is and where your data processing sits.
C
Just to add to that, specifically around Ingress: everything Graham says is absolutely correct, and you can still deploy your Ingress controller there. The way Ingress actually works is that you have an Ingress pod — let's say it's NGINX — and you usually have a Service that exposes it externally. That'll be something like NodePort, which just opens a port on the node, or a Service of type LoadBalancer; and if you create a Service of type LoadBalancer on the cloud, it actually provisions a real load balancer in front of the machines and wires that load balancer to the Ingress pod.

If you're on premises, how do you do that? Well, you might have come across this project called MetalLB. What you need is a Service of type LoadBalancer or NodePort, and this project, MetalLB — I'll share the link in a second — allows you to have Services of type NodePort or LoadBalancer on premises, and that probably solves your specific problem.
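As a rough sketch of that on-prem setup (the address range and labels below are assumptions for your own network, and the resources use the `metallb.io` API from MetalLB v0.13+), MetalLB is told which IPs it may hand out, and then an ordinary Service of type LoadBalancer in front of the Ingress controller receives one of them:

```yaml
# Pool of addresses MetalLB may assign -- assumed free on your LAN
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
# Announce the pool on the local network via layer-2 ARP
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
---
# Service exposing the Ingress controller, just as on a cloud provider
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller labels
  ports:
    - port: 80
      targetPort: 80
```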
C
No worries, hopefully I'll share it in the right place this time. Let's try again. There, that's MetalLB; check it out. It gives you the Service type LoadBalancer on premises, which in this case is just what you need.
B
So those Ingress controllers — NGINX or Traefik, or MetalLB on the Service side — are really just load balancers, right? They're just deployed into the Kubernetes cluster. But you can also have a physical hardware load balancer if you want: you can configure an F5 load balancer that just sits in front of your Kubernetes cluster and balances across all the Services and Ingresses it knows about.
E
From some of the training I've seen, they basically tell you to be wary of the load balancers that sit on the cloud, because you pay extra for them. So how does Ingress play in the public cloud? Do they provide an Ingress there that you pay extra for, or can you deploy your own Ingress in the public cloud as well?
C
So, basically, the Ingress setup runs as two parts. There's the controller, which runs as a pod, and you still have to expose it outside of the cluster; that bit needs to be exposed. You usually end up with just one load balancer, and yes, you have to pay for that load balancer — but then you can write all your routing rules inside the Ingress, so you only pay for one.
C
What
you
don't
want
to
do
is
imagine
you
have
your
services,
you
don't
want
to
expose
every
service
with
its
own
service
type
of
node
load
balancer,
because
you'll
end
up
with
like
50
load
balances,
and
it's
just
useless.
That's
why
you
want
to
expose
them
through
in
Ingress
itself.
So,
yes,
you
you
can
so
when
I
was
doing
mini,
Cube
I
actually
have
to
run
this
command
where
I
have
to
install
Ingress
in
my
cluster,
because
Ingress
doesn't
come
by
default.
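To make that fan-in concrete, here is a minimal sketch of an Ingress resource; the host, service name, and ingress class are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx          # assumes the NGINX controller is installed
  rules:
    - host: app.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app       # plain ClusterIP Service, not exposed directly
                port:
                  number: 80
```

On minikube the controller mentioned above can be enabled with `minikube addons enable ingress`; many routes like this one then share the single external entry point.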
C
So
you
pick
your
type
and
you
can
configure
it
and
when
you
configure
it
it'll
spin
up
something:
a
load
balancer
that
will
make
sure
that
when
the
requests
are
coming
from
outside
of
the
cluster,
they
can
go
to
the
Ingress
pod
and
then
the
Ingress
pod
looks
after
what,
where
the
request
needs
to
go
next,
what
needs
to
happen?
Does
that
answer
your
question.
C
Yeah, so it depends. When you're deploying it, you choose which component you're going to deploy, and on all the cloud providers you can pick your own Ingress controller, because it's just a pod. You can even deploy multiple Ingress controllers in your cluster; it doesn't matter, it's like any other Deployment.
C
So yes, you can deploy anything you like. Some cloud providers have their own; for example, on Azure you can use their Application Gateway as the Ingress controller, and that works. But you can deploy whatever you like. It's just a normal Deployment at the end of the day, and you can deploy what you want in the cloud.
C
Okay, I think I'm going to share one more thing around this debugging topic — there's a lot more that can go wrong — and then maybe we can start wrapping up. Our friends at Learnk8s have put this flowchart together; it's like a Kubernetes troubleshooting guide, and I'll show you. You can check the PDF; actually, let's open it as a PNG.
C
So when we were doing a deployment, you saw... am I sharing the right screen, by the way? Yeah? Perfect, excellent. So this walks you through some of the steps you have to take when you create a deployment and things go wrong: check whether the pods are running, and why they're not running. Maybe your cluster is too small — that's a problem as well.
C
You've run out of resources, and then, you know, this is one of the tips and tricks we were covering: try a port-forward, check if the pod is running, check if the service is running. It takes you through a lot of stuff and explains a number of things — check if the service is running, check if the controller is running — and at some point it even comes around and says, just have a look at Stack Overflow if nothing works.
C
Which is fair: check the stack trace, copy and paste it in there, and find out what's wrong. But definitely check this blog out; it's quite useful. I know people have printed it and put it up on their walls to debug from when they're starting out. It's a very useful resource. I'm going to put it in the chat again; let me just make sure I do it in the right place.
B
If it looks like that, go and check whether I've got a replica controller that's running; go and check the replica controller, get the events from there, describe the things I'm looking at. Any of those will give you a good clue. If a pod can't be scheduled onto a node because of the node sizes, then it'll tell you in one of those places.
B
So
you'll
see
it
in
the
events
that
the
scheduler
is
having
a
problem,
scheduling
the
Pod
to
be
to
be
running
on
any
of
the
nodes,
so
that
that
workflow
is
a
great
useful
tool
to
use
and
I've
seen
it
and
used
it.
And
it's
and
it's
kind
of
you
just
get
into
a
habit
of
understanding
all
the
moving
pieces.
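That habit boils down to a handful of kubectl commands run against a live cluster; the pod and deployment names below are placeholders, so treat this as a sketch of the workflow rather than the flowchart itself:

```shell
# Is the pod running, and if not, in what state (Pending, CrashLoopBackOff, ...)?
kubectl get pods

# The Events section at the bottom often names the problem
# (unschedulable, image pull errors, failing probes, ...)
kubectl describe pod my-app-abc123

# Application logs from the container itself (add --previous after a crash)
kubectl logs my-app-abc123

# Check the owning Deployment/ReplicaSet and the Service wiring
kubectl describe deployment my-app
kubectl get endpoints my-app

# Cluster-wide events, e.g. scheduler failures when nodes are too small
kubectl get events --sort-by=.metadata.creationTimestamp
```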
B
Once you start getting lots of moving pieces — applications deployed with, say, different networking, different storage, persistent volumes — there are a hundred and one things that could go wrong, or a thousand and one. It starts getting really hairy once you get down to "I've checked all these things and I still don't understand why my pod is not running", and that, for me...
B
That's the difficult thing in Kubernetes: when I've got a really complex deployment, the set of things that can go wrong and that I need to check grows exponentially, because there are 50 things to check with just one pod running in a deployment. So use that workflow; it'll help you dive in and diagnose the problems pretty well.
B
There's
some
other
stuff,
actually
just
just
as
we've
got
time
that
it's
probably
worth
looking
at,
maybe
not
for
this.
This
this
q,
a
but
looking
at
things
on
so
quite
often
you'll
you'll
deploy
a
container
image
that
just
doesn't
start,
which
is
nothing
to
do
with
kubernetes
configuration
it's
something
to
do.
You
got
something
wrong
in
your
container
that
doesn't
start,
it
might
run
locally
and
you
go
hey.
It
runs
locally
put
into
kubernetes
hey.
Why
isn't
it
working
with.
B
Yeah
I
worked
on
my
computer
and
doing
things
like
you.
There
are
other
tools
you
can
kubernetes
tools,
you
can
use
that
will
attach
to
a
running
container,
so
you
can
take
a
look
at
it
and
debug
it
before
it
explodes
and
isn't
working
on
the
machine
as
a
probably
in
on
the
kubernetes
docs.
It
talks
about
it
so
kind
of
in-containers
and
debug
containers
which
I'll
pull
the
link
out.
B
Yeah, so go and take a look at this page; it's called "Debug Running Pods", and it covers a lot of the things we've talked about. This is the one that always gets me as difficult: debugging pods that are in a Pending state, right? They've done something, they're not quite ready, the state is Pending. And if you scroll down — we didn't even look at this — there's debugging with container exec.
B
So
if
your
container
is
running
you
can
you
can
get
into
the
runtime
of
that
pod
and
kind
of
have
a
poke
around
to
figure
out
what's
wrong,
and
these
things
here
are
quite
cool,
so
using
an
inferior,
debug
container.
B
So
if
I've
got
a
container
that
doesn't
have
a
anything
that
I
can
attach
to,
then
I
can
use
an
ephemeral
container.
That's
that
I
can
attach
with
that
running
container,
which
I'll
be
able
to
then
kind
of
use
to
poke
into
because
it's
sharing
the
process
so
go
and
take
a
look
at
some
of
these
Pages
because
there's
some
really
interesting
things
in
here
as
well,
so
I'll
post
that.
A
Well, with that, we really appreciate everyone joining us today for our second live Q&A. Again, we'll be doing these weekly, and I will post them on our usual outlets via email and socials, so feel free to follow us on LinkedIn and YouTube and, probably, Snapchat, and all the other things that we do.
A
But yeah, that's interesting: we actually do have a question that came in. Do we have a Discord channel? We do actually have a community Slack that is out there. If you hang on a second, I will get the link to it right now.
A
So feel free to pop in there and join; all of us sit in there, so you can ask any questions and all of us will see them and be more than happy to respond.