From YouTube: Certs Magic with Saiyam and Rawkode (Episode 6)
A: Hello everyone, and welcome to Cloud Native TV and the Certs Magic show. Just before starting: this is an official live stream of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat, or any questions, that would be in violation of that code of conduct. So basically, please be respectful of all your fellow participants and presenters. Cloud Native TV runs amazing shows, so make sure you follow the cloudnative.tv Twitch channel, where you are seeing the live stream. This is the Certs Magic show, where we talk all about Kubernetes and related certifications, and this is the sixth episode in the series. Until now we have covered a lot from the curriculum, starting with why certifications are important.
A: What they are; then the installation of the cluster using CRI-O; then deployments, pods, and the different sets of objects; then obviously the scheduling topics, how you can schedule using nodeName, nodeSelector, and taints and tolerations; and how Services and Ingress work. Today we'll be talking about another very interesting section from the certification, a very big one.
A: That section plays a very important role in the exam as well: Kubernetes troubleshooting. If you go by the curriculum, it is around 30 percent of the questions that come up in the exam. Thirty percent is a big chunk of the exam, so you should know troubleshooting in and out, and who better than the person you are seeing on the screen
A: to tell you about troubleshooting? So I'm very glad that today I'm joined by my very, very, very good friend David, aka Rawkode. If you know the cloud native ecosystem, the cloud native world, the CNCF, or live streaming, then you have probably heard of David and his YouTube channel.
A: There is a very interesting series that Rawkode does, which obviously he'll explain much better, but it has given not only him but all of us great troubleshooting skills for Kubernetes. I have been on the show a couple of times, so I know what it takes, how things go, and how the flow goes. It actually gives you the exam feeling when you are on the show, because you have to fix the clusters, and they are broken.
A: So David, welcome to the stream, welcome to the CNCF Certs Magic show on Cloud Native TV. I know you also run a show, so we already know everything, but please introduce yourself and your show, Klustered, to the community.
B: All right, well, thank you very much, Saiyam. It's an absolute pleasure to be here. I've already had to do some debugging and fix my name live; I keep forgetting that I changed my name recently. But yeah, as Saiyam said, I'm David, and you'll know me across the internet as Rawkode. I am CKA and CKAD certified, not yet CKS, and, as Saiyam said, I have a show called Klustered, which is super, super good fun. It's a show that will help you learn how to debug and troubleshoot the worst, worst problems in the Kubernetes space.
A: All right, awesome. So what do we have today for the community with respect to troubleshooting, so that they can learn quite a few things?
B: Yeah, so there are a few things here in the curriculum document that we have shared on the screen, just a few things that everyone really needs to be familiar with. I think it's important to highlight, and I'm sure Saiyam's covered this before on previous episodes, that the CKA exam is about the administration and operability of a Kubernetes cluster, rather than deploying and working with Kubernetes.
B: So you really do need to know how to debug and understand all of the control plane components, and we're going to be taking a look at that today. You can see from this list, which is not exhaustive but covers most of the things, that we've got to be able to evaluate cluster and node logging.
B: We want to understand how to monitor applications. You definitely need to understand container logging, and then my favorite parts: troubleshooting application failures, troubleshooting cluster component failures, and troubleshooting networking. These are things you can read about all you want, but the best way to learn them is to get hands-on: kick the tires, play with all the components, and fix some real-world issues.
B: Nothing like a Kubernetes cluster on fire to make you learn things a little bit quicker, and that's what we've got for today. I've gone ahead and prepared two Kubernetes clusters. One of them is healthy, and Saiyam and I will go through it, take a look at all the components, and have a bit of a conversation. Then we'll pull up the broken cluster and see if we can work through it, issue by issue. Feel free to throw your ideas into the chat.
A: Yep, that sounds fun. Also, to make the stream interactive: if you keep suggesting your cool ideas on how to solve the particular issue we are on, we also have two coupon giveaways, 50% off on your certification exams, which is a good deal. So make sure you are chatting and making it interactive. That's pretty much it, and at the end we'll just pick two winners randomly.
B: All right, I love that first comment: "it's going to be DNS." It's not DNS today, I can assure you; I broke the thing, I know. Although that doesn't mean we won't encounter a real issue, and DNS does cause problems, but let's hope not. All right, so I'm using my actual Klustered automation for today.
A: Yeah, I think you can increase the font a bit. All right.
B: Let me just reload the page so the scroll thing goes away. Okay, so we've got our version here, and you can see our client is 1.22, but we did not get a version from the server. Now, I know the server is online. The thing that is missing is that we need a kubeconfig, and we can sort that through here.
B: There are a few assumptions being made here, but I think it's safe to say that these days most people are working with kubeadm clusters, which means that kubeadm is going to provision an admin.conf inside of the /etc/kubernetes directory. We can run our version command again, and you'll see that now we actually get a client version and our server version back. All right, something else you should probably be familiar with.
A: Yup, and the kubeconfig context plays a very important role when you are in the certification exam, because different questions are based in different contexts. So make sure you are always switching the context before attempting any question.
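A typical pre-question check along those lines might look like this sketch (the context name is illustrative, each exam question states its own):

```shell
# List every context defined in the active kubeconfig.
kubectl config get-contexts

# Switch to the context named in the question (name here is hypothetical).
kubectl config use-context cluster-one-admin

# Double-check before touching anything.
kubectl config current-context
```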
B: Yeah, great advice. So, because we've exported our KUBECONFIG environment variable, we can now run current-context, and in fact we could just run config view, and we can see all the details of the clusters that we have access to. It's important to remember that a kubeconfig can have multiple clusters, users, et cetera defined inside of it.
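As a rough sketch, a kubeconfig with those sections looks like this (names and the server address are illustrative; kubeadm's own admin.conf inlines the certificates as base64 data rather than file paths):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: healthy
  cluster:
    server: https://192.168.1.10:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: kubernetes-admin
  user:
    client-certificate: /etc/kubernetes/pki/admin.crt
    client-key: /etc/kubernetes/pki/admin.key
contexts:
# A context ties one cluster entry to one user entry.
- name: kubernetes-admin@healthy
  context:
    cluster: healthy
    user: kubernetes-admin
current-context: kubernetes-admin@healthy
```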
B: Now, the reason you can rely on this is, one, the kubelet is responsible for asking a container runtime to run your containers, so it's unlikely that your kubelet itself will run inside of a container, although not impossible, of course. And because it's a kubeadm cluster, what we're going to see is that all of our other control plane components are started by the kubelet via something called a static pod, which we'll talk about again in just a second.
B: If you're ever running into any problems on a Kubernetes cluster, `systemctl status kubelet` is your friend. You want to make sure that it is active and running; you don't want to see it with any sort of restarts or an "inactive" line, which would typically be bad. We can see we get some log information here, but the status command is not the best way to work with the logs on your cluster.
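On a systemd host, those checks look roughly like this sketch:

```shell
# Is the kubelet healthy? Look for "active (running)" and no recent restarts.
systemctl status kubelet

# Follow the kubelet's journal live, rather than the short status snippet.
journalctl -f -u kubelet

# Or review everything it has logged since the last boot.
journalctl -u kubelet -b --no-pager
```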
B: Thank you, Fresbo, I'll pull that back a little bit. We can see here that `journalctl -f -u kubelet` will let us pull out our logs from the kubelet. This looks pretty healthy; I'm not worried about any of the errors that we see here, and this is our healthy cluster, so I'm pretty confident. Excuse me.
B: This is where all of the static manifests live. By static manifest, what we mean is something that the kubelet is going to be responsible for starting when it starts, so you'll see all the other control plane components are here: we've got etcd, we've got the API server, we have the controller manager, and we have the scheduler. I'm also running kube-vip here, which I need for bare-metal ingress.
B: Now, the kube-system namespace is where all of these static manifests will live, and we can see that everything here is running. There are some little idiosyncratic or weird things you'll see across the documentation and error messages, particularly if you start to look inside the kubelet logs. When we refer to a static manifest, that is the YAML file that lives in the static manifest directory, and you may also hear of something called a mirror pod.
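A static pod manifest is just an ordinary pod spec dropped into that directory; a minimal sketch (the name and image are illustrative, and real kubeadm manifests carry many more flags and volumes):

```yaml
# /etc/kubernetes/manifests/example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: example
    image: registry.k8s.io/pause:3.9
```

The kubelet watches the manifest directory and starts anything it finds there, then reports a read-only "mirror pod" for it to the API server so it shows up in `kubectl get pods`.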
B: Okay, we're going to look at a couple more tools before we dive over to the broken cluster. Now, as I said earlier, containerd has its own namespaces, which you shouldn't confuse with Kubernetes namespaces; I know that can get a little bit weird. We can actually say `ctr --namespace k8s.io images list`, and these are all the images that it has pulled inside of this namespace for running inside of my cluster.
B: This is something I see tripping people up often: they run `ctr images` and they're like, "Oh, my cluster is running more images than this, where are they?" It's just that namespace toggle, and you can actually list the namespaces as well with `ctr ns ls`. Now, ctr is a bit more low-level; you may want to work with something that is slightly more aware of Kubernetes, and for that we have crictl.
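Putting those two tools side by side, a debugging session on the node might look like this sketch (the containerd socket path shown is a common default; check your own host's configuration):

```shell
# containerd's view: list its namespaces, then the images in the one the kubelet uses.
ctr ns ls
ctr --namespace k8s.io images list

# crictl speaks CRI and understands Kubernetes pods and containers.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
```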
A: Yep, so crictl, and providing the runtime endpoint, is very important, and I think you should keep it handy somewhere so you can directly copy-paste it and run it. And there was a question about why it is important to go to the file system rather than using `kubectl logs`.
A
So
there
might
be
times
when,
when
the
kubernetes
cube
city
will
get
poor
skips,
it
will
get
nodes
itself,
won't
work,
so
control
plane
is
down,
and
things
like
that.
So
for
that
you
have
nowhere
to
debug,
so
general
ctl
logs
or
you
know
that
they
can
give
you
the
first
level
of
information
and
then
you
can
move
to
the
file
system,
which
is
the
edc
manifest
and
the
wear
log
cubelet.
Those
are
some
of
the
you
know
kind
of
directories
where
you
can
see
the
containers.
You
can
see
what
all
things
are
happening.
A
Obviously
there
are
a
lot
more
things,
not
lot
more
nasty
things
that
attacker
can
do
that.
That
can
be
done,
but
generally
these
would
be
some
of
the
initial
places
that
you
will
be
looking
at
and
rightly
said,
like
cubelet
is
not
something
that
would
run
the
container.
So
that
is
very
important.
You
should
take
care
of
that.
A: The kubelet sends the request to the container runtime, and the container runtime in turn runs your containers. And even then, if it's containerd, containerd itself does not run the container; it's actually runc behind it that actually runs the container. So there are different levels there, and that is also a good thing to know.
B: Yeah, definitely. I'll add one more thing to that, although Saiyam smashed it, nailed everything there: you may not have an API server at all, so knowing where these things live on disk is critical. Also, through the `kubectl logs` command you can access the current logs or the previous logs, but you can't go back any further than that.
B: There we go, that's better; I was getting worried I'd broken Teleport. Okay, so I'll zoom in one more time and refresh the page just to get rid of that bug. Okay, so we have a control plane, we hope. I'm going to run version, and we can see we've got our client version, but we don't have our server version. So we know how to fix this, right? We're going to export our KUBECONFIG.
B: All right, so it still failed, and we got an error message about the connection to our server. Now, what was really important in these messages: the first one said localhost:8080. This is the default; it means that you don't have a kubeconfig configured.
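Those two failure modes are worth memorizing. Roughly (the second address is illustrative):

```
# No kubeconfig at all: kubectl falls back to its compiled-in default.
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# A kubeconfig is loaded, but the API server itself is unreachable.
The connection to the server 192.168.1.10:6443 was refused - did you specify the right host or port?
```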
B: When it's an IP address and a port, that's an indicator that we do have some Kubernetes context, but we're not able to speak to the server, so that means something is definitely wrong here. Russ is asking if I'll stick to my own rules: I guarantee you, I did not use any Unicode breaks, eBPF, or any naughty things like that.
A: Yep, so please post in the chat the next steps that you think should be done, yeah.
A: Yeah, I just don't even remember what I have; that's the toughest question you can ask someone.
B: It worked, there we go. Well, things look good. No, this is the start, yeah; I just read your reminder, so I'm going straight forward with the pods, and I'm going to play standard Klustered rules today. Let me give you a bit of context before we move on. We're supposed to have a deployment called clustered, with a pod called clustered, shown up here, and we should be able to browse to it, and that is currently unavailable.
B: We can also see that our API server is actually broken, and the reason I did this one, other than it just being funny, is that it kind of highlights that static pod manifest, mirror pod semantic: it's not really a pod. So right now this says ContainerCreating, but we are creating the API server, right? Just be careful of that. You can also see that we don't have a scheduler, and schedulers are not that important, I don't really care, so we'll see if we need it; but we want to get our application running. Okay.
A: Yep, we can go to journalctl and see what is happening with the kubelet and so on.
B: Yes, we have a lot of error messages here, and one particularly important one. I don't know if you've seen it yet.
B: Okay, we have an error message that an admission controller is denying all modifications to our cluster, and this is one of my favorite examples of something we have to talk about when it comes to debugging the Kubernetes API. There are two different types of admission controllers. Most people are really familiar with dynamic admission controllers, which are validating webhook configurations and mutating webhook configurations, but historically, prior to dynamic admission controllers, the API server did everything through built-in components that were compiled into the API server binary.
B: So we've got that to fix, which means we need to check out the static manifest for our API server. Where do those live again? There I am.
B: Sorry, I'm just having some fun with this session, I hope you don't mind. But it's important to understand how all these components are configured as well. The static manifest directory is consumed by the kubelet, so we need to understand how the kubelet is configured too. In fact, Russ is straight in there with the chat as well, yeah.
B: So what monster changed that? I know, Russell. So we're going to remove it and save this. Now, when you make modifications to the static manifest directory, or any of the YAMLs in there, the kubelet will automatically detect that change and, over the course of around 30 seconds, will remove the old container and start your new container.
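The stream doesn't show the exact flag, but the classic way to make an API server deny all modifications is the built-in AlwaysDeny plugin in its static manifest, along these lines (a hedged illustration, not necessarily the literal break used here):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - command:
    - kube-apiserver
    # Removing AlwaysDeny from this list restores normal admission behavior;
    # the kubelet restarts the API server once the file is saved.
    - --enable-admission-plugins=NodeRestriction,AlwaysDeny
```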
B: All right, nobody? Okay, so we're going to plod on and fix this one. Thank you all in the chat. So we're going to pop open, where does this one live again, the controller manager? Let's talk about the responsibilities of the control plane components. We have the kubelet, which is started as a systemd service and is responsible for sending messages to the container runtime interface to start all the containers that we need.
B: We have the API server, which is essentially a CRUD interface in front of our etcd backing store, which stores all of the events, requests, and so on that come into our cluster. We have the scheduler, which is broken, and which has some responsibilities we may talk about if we fix it. And we have the controller manager, which is this super controller of controllers of controllers. And did you know you can disable any of the controllers within the controller manager through its configuration?
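The knob in question is the controller manager's `--controllers` flag. A sketch of what a sabotaged static manifest might contain, consistent with the two controllers discussed next (the exact list used on the stream isn't shown):

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (fragment)
spec:
  containers:
  - command:
    - kube-controller-manager
    # "*" enables the default set; a leading "-" disables a named controller.
    # Here the namespace and replicaset controllers are switched off.
    - --controllers=*,-namespace,-replicaset
```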
B: So we can remove that, which is going to bring back our namespace and our replica set controllers. In fact, I might leave the namespace one broken, because it's a nice visual way for me to create a namespace and you'll see that nothing actually happens. So we'll save that, we'll run ps, and maybe I'll get lucky and catch this one. All right, see: we have no controller manager right now, so the kubelet has detected that change, it has removed the old process, and there we go, third time lucky.
A: No, no, I said the result, yeah. I think there was nothing in that describe, but if you had done `ps aux` for the controller manager, then you probably might have seen the omissions there, like the `-namespace` and `-replicaset` entries, and you could have gotten to this particular point, the editing of the controller manager YAML file.
B: You really just have to get familiar with the static pod manifests and know what to expect, and there are a few red herrings, which we can point out. There's stuff in here that looks weird but rarely is weird. You know, remember for the bind address we want that to be 0.0.0.0 or maybe the local IPv4 address, and understand which authorization modes are defined by default.
B
You
know
you're
going
to
see
some
insecure
port
configurations,
not
in
this
one
and
maybe
the
controller.
No,
maybe
the
queue
yeah.
You
can
see
a
port
zero
here.
So
there's
just
some
of
these
things
that
you
pick
up
over
time
and
you
think,
okay,
that
looks
weird,
but
I
know
it's
completely
normal
and
I
hope
my
dog
is
not
deafening.
You
at
all.
B
All
right,
okay,
so
we
do
have
a
scheduler
bug,
so
I'm
I'm
on
purposely
not
going
to
fix
the
scheduler
and
I'll
show
you
what
the
problem
is.
It's
trying
to
run
scheduler
124.
B
there
is
no
kubernetes
124
yet
so
it's
just
not
going
to
work,
however,
to
demonstrate
what
the
responsibilities
of
the
scheduler
are
they're
almost
all
involved.
There's
a
scheduler
really
doesn't
do
anything.
It
listens
for
pods
being
created
and
it
adds
one
field
to
the
spec,
which
is
the
node
that
should
run
on
now.
It
has
some
abilities
to
understand
what's
running
on
the
nodes,
what
constraints
need
to
be
applied?
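That one field is `spec.nodeName`, and you can set it yourself to bypass a dead scheduler. A sketch (node and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: break-glass
spec:
  # Normally filled in by kube-scheduler; setting it by hand skips
  # scheduling entirely, so the kubelet on that node starts the pod.
  nodeName: worker-1
  containers:
  - name: web
    image: nginx:1.21
```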
B: You'll see that our clustered thing has now been scheduled. So the scheduler is important: it does work out the best place to run, especially with taints, tolerations, and constraints, all of those things. But sometimes it's also important to know that you can break the glass when things go really, really wrong and schedule the workload that you need as quickly as possible.
B: All right, so I think it's important to understand what Teleport is doing right now, so you can understand what the actual remaining problem is. Clustered is exposed as a NodePort service, and Teleport is trying to redirect to that NodePort service. The NodePort service not working tells us that we have an ingress problem into our cluster. So I will give the chat 30 seconds: what do we look at next to debug an ingress networking problem coming into our cluster?
B: Okay, so it wasn't a network policy. Now, the other thing that is important here is that, with the network policies and CNI implementations we have these days, the standard Kubernetes network policies are not always the only network policies within a cluster: Cilium and Calico both bring their own adaptations.
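For reference, a standard Kubernetes policy that would break ingress to a pod like this might look as follows (namespace and labels are illustrative). Remember that Cilium's or Calico's own policy resources can restrict traffic even when `kubectl get networkpolicy` comes back empty:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lock-down-clustered
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: clustered
  policyTypes:
  - Ingress
  # No ingress rules listed: all inbound traffic to matching pods is denied.
```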
B: I don't know why it's not updating... there we go, and now we have to dance. We have now fixed our broken Kubernetes cluster through various debugging techniques: understanding the control plane, knowing where logs live, and understanding the different implementations of the CNI and CRI. I've added one more cheeky break around containerd, which I'm going to show you because I think it's really funny, and it's something that people do on Klustered all the time.
B
This
is
really
handy,
the
container
demeter
setup
for
having
pulled
through
caches
and
keeping
things
locally,
even
though
your
manifest
refer
to
a
canonical
implementation,
great
feature
and
but
very
easy
to
trip
you
up,
and
I
hate
everyone.
That's
used
it
on
clustered
thanks.
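The mirror feature being described lives in containerd's configuration. A sketch of the common inline form (the endpoint is illustrative, and containerd must be restarted for changes to take effect, which is why this break is such a favorite):

```toml
# /etc/containerd/config.toml (fragment)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  # Pulls for docker.io images are redirected here first; pointing this
  # at a bad endpoint quietly breaks image pulls for the whole node.
  endpoint = ["https://mirror.example.internal:5000"]
```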
A: Awesome. I think that was really great: the fixes, and the concepts that were discussed during the fixes, which will definitely help you understand the control plane components, how they behave, where they are located, and how you can play around with the different configurations and options for the controller manager, the scheduler, and the API server. I think that's really insightful, David.
A
So
thank
you
for
bringing
all
the
broken
cluster
and
explaining
the
concepts
first,
so
that
would
have
definitely
been
educational
for
everybody
who
has
attended
life
and
who
will
be
watching
the
recording
later
so
and
and
yes,
if
you,
you
know,
want
to
watch
kind
of
more
debugging
things
just
like
david
did
today.
This
is
getting
done
all
the
times
on
clustered.
A
So
this
is
this
is
what
actually
happens,
people
see
and
guess
what
what
happens,
what
not
happens,
what
works,
what
doesn't
work
and
they
try
to
fix-
and
it's
kind
of
you
know
in
one
hour,
we
try
to
fix
something
and
it
sometimes
does
get
fixed.
Sometimes
you
have
to
take
the
hints.
So
it's
okay!
That's
because
there
are
some
nasty
things
that
people
do
with
the
with
the
clusters,
so
so
that
that
keeps
on
happening.
But
in
the
end,
it's
all
about
learning.
A
So
we
hope
you
learned
something
for
from
the
communities
perspective
from
the
certification
perspective
and
also
or
from
the
day-to-day
perspective
of
your
jobs
that
you
might
be
using
in
your
debugging
in
in
general,
when
you're
working
with
communities
with
that,
I
think
I'm
so
david.
Who
should
we
give
the
vouchers?
I
think
russ
has
been
very
active
in
the
chat,
so
one
voucher
goes
to
russ
and
even
freeze
bow
is
was
active
in
the
chat,
so
another
one
goes
to.
B: Before we finish: this stuff is really hard. It's only through sharing our knowledge, experimenting, and breaking things intentionally that we learn. Chaos engineering is a really important part of adopting cloud native and Kubernetes, and it's best that you learn how to fix these situations, and all the fires that can happen, before they hit you in real-life production. So get creative, start breaking stuff, and best of luck.
A: Okay, so Russ is saying he doesn't want to go for the certification. Then another person who commented was aj50500; I really don't know who you are, so, aj50500:
A: If you are in the stream, then please reach out to me on Twitter so I can hand you the voucher, and Fresbo as well, please reach out and I'll give you the voucher, which is the 50% discount coupon for the certification exams. With that, thank you all for tuning in to the Certs Magic show on Cloud Native TV.
A: Do not forget to click the follow button, because that is important, and there are a lot of shows that keep going on; even tomorrow there's a spotlight live with gRPC, so do not miss that, and interesting shows keep happening all week. Make sure you follow, and we hope you learned something new today. Thank you so much, everyone, and goodbye.