From YouTube: TGI Kubernetes 086: Grokking Kubernetes - The kubelet
Description
Join Duffie Cooley as he returns to TGIK to explore some resources around understanding Kubernetes as a system. We'll explore some of the tooling out there that helps build a mental model of the system itself and how all the parts of Kubernetes work together.
Come hang out, ask questions, and share your experiences!
Hey, good afternoon everybody, this is Duffie Cooley coming at you from TGIK, and welcome to TGIK episode number 86. In this episode I'm going to try to provide kind of a reference for just understanding Kubernetes as a system. There is a lot to this, and you can tell that by the notes, which I've already put a link to in the chat. I'm trying something new this week.
You can actually get to the notes by just going to tgik.io/notes, and you will find them there.
So, let's go say hello to everybody. Maddy, good afternoon, good to see you, and I will answer that question — how was Black Hat? — here in a second. The tl;dr: it was incredible. I had a wonderful time. It was such a great experience, but we're going to dig into that a little bit more. Oh, I didn't realize the livestream — I may have misspelled that, you know.
One of my co-workers here — another fellow Heptio person — saying hello. And Cup Cat, from Ireland — I love Ireland, such a beautiful place. Enjoy, from Richmond. And we have octetz — it's Josh from Colorado, logging in with his octetz channel ID. That's funny. Alright, well, good afternoon everybody. Again, today we're going to try to build kind of a frame of reference for how to understand Kubernetes as a system.
So, a container. One of the things that I've actually finally been able to express in a way that can be easily understood — I'm actually just going to flip to a server here real quick. We're looking at containers, so I'm going to flip to a different view here. Bear with me for just a moment while I flip screens and — oop, different theme. Hello, everybody. Alright.
Now, what I'm doing here is pulling down, effectively, a tarball. Okay? I'm pulling down a tarball that makes up the filesystem that that container is going to use, and then, when I do docker run, I'm actually going to start a process that will run nginx for me locally. And in that process — this is actually where, in my opinion, the container really starts, right?
If I were to look at, you know, my own process, I would be able to see that the output for the two is different. There are some things that are in common, but many of these things are different, right? So in my case, we're in a bunch of different namespaces for this process, and the namespaces associated with the other one are also different.
Basically, the default namespaces are those of the Linux kernel that I've logged into, whereas the nginx instance is actually mapped to a different set of namespaces, isolated for that process, created when the Docker container was created. So when I do docker run, there are a bunch of calls happening that basically create new namespaces, position that process inside of those new namespaces, and provide that process access only to those specific namespaces. So my PID namespace should be different, my network namespace should be different — and they are, right? And that's how I see different things when I'm inside of the container and when I'm outside the container. That's the isolation model. So if you were looking for a way to visualize a container as it relates to the process, this is one way to go about that.
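The comparison being described here can be reproduced with a couple of commands (a minimal sketch, assuming a Linux host with Docker and a container started as `docker run -d --name web nginx`; the container name `web` is illustrative):

```shell
# Every process's namespace memberships are visible as symlinks under
# /proc/<pid>/ns; two processes share a namespace only if the links match.

# Namespaces of the current shell -- the "host" side:
readlink /proc/$$/ns/pid /proc/$$/ns/net

# Host PID of the nginx master process inside the container:
NGINX_PID=$(docker inspect --format '{{.State.Pid}}' web)

# Namespaces of the containerized process -- the inode numbers differ,
# which is exactly the isolation model described above:
readlink /proc/"$NGINX_PID"/ns/pid /proc/"$NGINX_PID"/ns/net
```

The shell's and the nginx process's links point at different `pid:[…]` and `net:[…]` inodes, which is why `ps` output inside the container differs from the host's.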
So, cool — I just wanted to share that with you real quick, and then we'll get back into kind of our normal swing of things. That was one of the things I shared at Black Hat, and we talked about some tooling there as well — we'll probably dig into that a little bit more — but I just wanted to give you some frame of reference for some of the stuff that I was talking about.
I thought it was pretty deep stuff, but I received a lot of great feedback, and I know that they're going to post the video; as soon as they do, I hope to share that video with you all as well. Alright, so let's talk about the Week in Review, and then we'll dig into some of these other things.
Mr. John Harris, one of my co-workers on the cloud native architecture team, put up an article on using sudo-like access with kubectl, and I've seen this actually implemented a couple of different times. Actually, I think he put it on — nope, talking about that — yeah, there's actually a project that tries to do this for you, called kubectl-sudo. The idea of this is actually pretty cool, and I think from a security perspective it's really important, because it enables sort of a sudo level of access. To explain it, I'm actually going to go to the project page for kubectl-sudo, because I think they do a pretty decent job of explaining why that's an important thing.
If you follow this pattern, what you're going to do is allow the user a credential, and that credential for that user will be bound to, like, a read-only mode or a view mode. Then, any time that user wants to actually modify or change an object, they can use a sudo-like mechanism where they switch to a different group, using kubectl, to apply that change. Now, this lets us give a little bit more restricted access and also kind of cuts down on some of the mistakes.
Right — like, this is sort of the power of sudo: you have the ability to make changes within your particular environment, but when you're going to make changes that would affect kind of a larger group — like making changes to something that's actually configured under the /etc directory, or installing packages, or those sorts of things that you have to do at kind of a cluster level, or maybe even a system level — that requires elevated access. And I think this is actually a really great pattern for that.
What this allows us to do is basically use a kubectl plugin, and it can be configured to match effectively the same argument that John is making in his article: when a user wants to do something that is above their normal read or view access, they would effectively impersonate — which is a capability of Kubernetes.
So I think that's actually a pretty good way to see it. Good afternoon, Joe, good to see you. Alright — so, I mean, this article and the associated project both get into this pretty well; I definitely recommend reading it. It's a really great solution for how to provide kind of better isolation between the way that a user authenticates and manages resources within the cluster, and how to understand the history of those events and how those things have changed over time. So, pretty cool stuff.
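The mechanism underneath this pattern can be sketched in plain RBAC (an illustrative sketch, not the article's exact manifests; the role name `sudoer` and the target group are assumptions):

```yaml
# Holders of this ClusterRole may impersonate the privileged
# system:masters group (plus a user identity to attach it to) --
# the sudo-like escalation described above. Day to day, they keep
# only their read-only binding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sudoer
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
- apiGroups: [""]
  resources: ["groups"]
  verbs: ["impersonate"]
  resourceNames: ["system:masters"]
```

With that bound, an edit becomes an explicit, deliberate act — something like `kubectl apply -f change.yaml --as="$USER" --as-group=system:masters` — while a bare `kubectl apply` fails under the read-only credential.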
An interesting one. If you read through what's actually happening here: effectively, if you were to provide, in RBAC, a role that gives the user of a particular role access to something that's defined at the cluster scope, that user can affect that thing at the cluster scope — even if they don't have access at the cluster scope. So that's a little harder to understand; let me break it down a little bit. I create a user — call him Bob — and I associate a role with that user.
Now, what this bug — this CVE — highlights is that Bob can actually manipulate those cluster-scoped resources with this permission. Probably not what we want, right? Because he's supposed to be isolated to only those resources that are defined within his namespace. But with this permission — with this bug that has been found — because we've actually defined a way for Bob to access those things at a cluster scope, he can actually modify and interact with those objects even at that cluster level, which is definitely more permission than we expected to give there.
The fix is basically, you know, to validate that the scope that you're operating from does or does not have access at the cluster level. And so, if your access is scoped to a namespace — if you're coming from a RoleBinding — then we will drop the permissions to access things that are defined outside of the namespace-scoped scenario. I know that was a lot to talk about, and it was probably a little confusing, but that's effectively what the CVE does.
The tl;dr here is: if you're careful about what permissions you're already giving to users — if you're using, like, the admin role when you create a RoleBinding associating a user with a namespace — you're going to be okay. But if you're providing wide-ranging permissions — verb: any, resource: any — to something that's bound in a namespace, then you're giving more permissions than you think you are, until you're on a version of Kubernetes that has the fix. Alright, I hope that tl;dr makes sense.
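To make the setup concrete, the scenario described above looks roughly like this (an illustrative sketch; the `widgets` resource, names, and namespace are invented for the example):

```yaml
# A namespaced Role granting wildcard access to a custom resource,
# bound to Bob only within his namespace. On unpatched versions,
# these verbs also worked against the *cluster-scoped* instances of
# the resource -- the over-grant described above.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: widget-admin
  namespace: bobs-namespace
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bob-widget-admin
  namespace: bobs-namespace
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: widget-admin
  apiGroup: rbac.authorization.k8s.io
```

The intent is that `bob` manages widgets only inside `bobs-namespace`; the bug was that the same verbs also reached instances living outside any namespace.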
Okie dokie. So, the next one up — you asked about that CVE; it's an important one, and I hope that my explanation makes sense to folks about what's actually happening there. It is a really important one. The next one — again in security — I talked a lot about this one this week at Black Hat.
I'm going to stop looking here in a second, but I was really hoping — there we go, that's what I was looking for. Okay, so this is what I was looking for. This is actually really great, because it highlights exactly how Trail of Bits went about doing this security audit, what they found, and kind of the way they did it — and the notes and stuff that they produced as well, which I thought was just incredible information for understanding this space and how it works. I'm going to put that link in our HackMD.
...the findings, and then we'll work through fixing them. And then also, the company that did the work did an incredible job of really exposing what work they did and how they went about it, which I thought was just mind-blowing — really cool. So check it out if you're interested in the security space, or in Kubernetes, or in what they found.
Next up, we have Attacking and Defending Kubernetes with Ian Coldwater. They are the person that I presented with at Black Hat on Kubernetes security, and Ian is, again, an incredible engineer — a security engineer who's focused on protecting and attacking Kubernetes clusters. They work at Heroku, which is a subsidiary of Salesforce, and it was just such a great experience to work through this. So, prior to our Black Hat talk, this interview happened, and, you know, I definitely recommend giving it a listen. It was great.
On we go: we have Storage on Kubernetes, an article written by Vito Botta. It explores some of the different storage options that are available, tries to kind of work through that whole set, and gives you some context around storage. So I definitely recommend this — I've dug through it, and I thought it was actually really well researched and really well written — so definitely check that out. Tools.
Some of the policies that you're granting — possibly granting — to users who have access to Kubernetes clusters are a little more than you expect them to be. So, you know, definitely check out some of these tools for how to actually understand what a user can do and what access they have — definitely worth checking out. I'm not sure if they call out one of my favorites — yeah, they do: they have `kubectl auth can-i`, and they have `can-i --list`. This is a relatively new feature.
This is a relatively new feature in kubectl, the command-line tool. What it does is make use of the SelfSubjectAccessReview and SelfSubjectRulesReview APIs — a new API object in 1.13, I think it was; might have been 1.12 — and what it gives you is the ability to enumerate all of the permissions that a user has. So that's actually pretty great — definitely worth checking out. So we can see...
You know, we can see that the permissions this particular user has are pretty wide-ranging in the output here, right? We can see that this person has access to do anything with any resource — any verb, any resource, not filtered. So this is a great article to read if you want to just dig into RBAC and how it works — or how it's supposed to work — and that sort of thing.
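Under the hood, `kubectl auth can-i --list` posts a SelfSubjectRulesReview and renders the rules it gets back; the request object itself is tiny (shown here for reference):

```yaml
# "What can *I* do in this namespace?" -- the API server fills in
# status.resourceRules and status.nonResourceRules in its response.
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
spec:
  namespace: default
```

Running `kubectl auth can-i --list -n default` as an unrestricted user shows the wide-open rule being pointed out here: every verb on every resource.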
So, the Pinterest engineering team has been talking about building a Kubernetes platform at Pinterest. I know that Pinterest has been working on this to a great extent — they've been working on it for some time; I've got some friends over there who are working on this very same thing — and so it's definitely worth reading how they went about it and what they've learned in that process. You know, again, it's just kind of the transparency from the community.
I can't thank the community enough for just being really open and honest about what they've experienced and what's happening there. That's just incredible. So, definitely a good read, and I think that is all for the Week in Review. Alright. We do have the slides for reconciliation; we're going to get to that — it's not going to be the first part of it, though; we're going to dig into that a little bit later on. Alright, now...
...but again, what I want people to come away with from this discussion is the tools and the frame of reference with which to kind of explore Kubernetes as a system. I'm trying to impart, you know, some of my understanding of how all of it works, and kind of get that stuff out there. So, to set the stage: there are a couple of things that we're going to work through today, as we work through this process.
The first thing I wanted to highlight: when I started exploring Kubernetes as a project, the first thing I did was actually start looking at it from the perspective of understanding the system itself — the application that is Kubernetes. I spent my time focusing on how to understand that better, and how all the different pieces and parts work, and so we're going to start our conversation kind of in line with that model. The way that I did that initially was...
So, this page is updated with every release; as we change features, or add features, or modify stuff, this page is updated. Alright — and a lot of the time, if we're going to change something that is specific to the reference of one of these components, it will also make the release notes. I think we have shown you that page before, but if we haven't, I definitely recommend looking at relnotes.k8s.io.
If you're curious about how this works: it's kind of broken down by some of the things that could be changed, right? So we have controller-manager; we have kubeadm and kubectl; kubelet; cloud provider stuff; we have API server; we have IPVS, which is related to kube-proxy. All of these tags are here kind of for our use, to filter down to those things that we're interested in understanding more about. So, definitely worth looking at this and understanding what's out there.
If you click on the 1.15 tag inside of here, we can see only those things in the results that are specific to 1.15, and we can understand what things are being changed. So if the kubelet made a change, or local storage made a change, this is a great way of kind of digging into that. But going back to our point from before: we're going to start with the kubelet, and we're going to talk about the kubelet specifically. So, in the Kubernetes documentation, we have a brief overview of what the kubelet...
...does. It's the node agent. It's a Go binary — it's statically compiled with everything that it needs to do its job — but it also relies on the existence of some of the other implementations within a node, right? So it relies on things like the Container Networking Interface; it relies on things like your container runtime, and any of the storage configurations that you have. But the actual implementation of, like, you know, taking a pod specification down from the API server and making it a real thing...
...Another way is through the HTTP endpoint. That HTTP endpoint basically provides the ability to expose the kubelet's API, such that you can use that API to create containers. This one is really not used very often at all, but it does exist. And then the third one is the HTTP server, where the kubelet will...
...you know, interact with the API server, do a watch, and keep an eye out for pods that are associated with itself — and download the manifest associated with a pod once it has determined that there's work for it to do, and then bring that thing up. We're going to kind of walk through a couple of those examples when we are exploring the kubelet. And then the other thing that they highlight here, which I think is definitely worth understanding, is the pod lifecycle event generator — PLEG. To give you a quick overview of what PLEG does...
...it's a control loop that happens inside of the kubelet, interacting with the container runtime, and it tries to understand: are all of the containers that the kubelet has started up still running, or has there been any lifecycle change — a lifecycle event — associated with those containers in the intervening time between runs of that control loop? Right?
So this is a way for the kubelet to get fresh information about what's happening with the container runtime — its hard dependency — around, like, are all the things still working or not working. And PLEG has proven difficult to work with over the years, but I think it's actually calmed down quite a lot. As you can imagine, effectively what this is, is the kubelet making use of Docker — or the container runtime interface — to understand the status of all of the running containers.
So we've talked a little bit about how that part of it works — like static pods, and pulling down manifests from the cluster. I also want to actually show you that in real time on a cluster, so we're going to jump over to our terminal. I'm going to show you those two things, and then we're going to move on to some of the command-line options, which I think are also very important to talk about. So.
So here are our nodes: I have three workers and a control-plane node, and they're all kind of running in a NotReady state. Now, I wanted to point this out because I think it's actually kind of important that we understand how this works. And so, if I do kubectl get pods --all-namespaces, I can see that there are some things that are running and some things that are not. What's important to understand, when we're looking at this initial view, is that I have not deployed a container networking interface at this time, right?
There are things consuming resources, right? And I can see that they're up and running and operating just fine. Now what I want to do is take a look at how those things are running — what the configuration of the kubelet is, and how that works. Now, before I move on to that part of it, I do want to highlight that we can see the CoreDNS pods themselves are not running.
So what do you think is the difference between the CoreDNS pods and all of the other pods that are in a running state? I know that this is a lot of information quickly, but I think it'll be helpful. Anybody have a guess — a gander — about what that could be? I will give you a hint: get pods -o wide.
Let's shrink that a little bit, so we can actually — there we go. So the trick to it is: if we look at those IP addresses, and we do kubectl get nodes -o wide, those IP addresses are the IP addresses of the nodes. So each of these things that is running is actually using hostNetwork: true — these pods are all running with host networking.
The important part is that it has hostNetwork: true, right? Because of that, we're not reliant on the CNI existing before the container is created. This is actually one of the interesting things about kubelets, right? You don't necessarily need a CNI to schedule work on a kubelet.
Now, let's talk about static manifests — the static manifests and how they work — and we're also going to kind of explore the kubelet itself. So I'm going to docker exec into a place where I have a control plane running. In this case — actually, you know what, let's do it on a worker instead: kind-worker, bash. Now, to start off:
How do we understand that the kubelet is running, other than the fact that we can see it registered, right? Most of the time, it's run as a systemd unit. So if I do systemctl cat kubelet, I can see the configuration of the kubelet as it is configured by systemd, and if I do journalctl -f -u kubelet, I can follow the log of the kubelet and see what the logs of that kubelet are directly, right? So, right now:
journalctl -f -u kubelet — and now we don't see that error anymore, but we do see an error talking about the container runtime not running, right? And so it's unable to actually start anything that would require it. So the kubelet is not ready, because the network plugin returns error: CNI not initialized.
Going back to our unit: this is really specific to the way that kubeadm configures these things, and so, depending on the way that you deploy Kubernetes, it may be different for you than it is for this kubelet. Okay, so I'm going to cat and take a look at this one — again, just looking at where the containerd socket is, and the fail-on-swap setting for the node — but you can see it is also passing arguments to the kubelet.
So this is the argument right here that describes where all of the arguments that we're using to start the kubelet are located, right? So we're looking for these environment variables that are defined inside of the file: we have KUBELET_CONFIG_ARGS defined, we have KUBELET_KUBECONFIG_ARGS, and /etc/default/kubelet. All those things are defined, but we don't see anything that would actually set the verbosity.
And here we see the argument I'm looking for, which is -v. The level goes all the way up to ten, just like most of the other things within Kubernetes, and if we bump the verbosity up, we get a lot more information about what's happening with this specific kubelet. And I think that, for me, when characterizing what's actually happening with a component like the kubelet, it's really important to be able to actually see better logs — so I'm going to bump it up to eight, which is a lot of logs.
When you're doing this with systemd, you kind of need to do a daemon-reload, and then you can do a restart of the kubelet, and then we can run that same journalctl -f -u kubelet command — and we're getting more information about what's actually happening here. So we're getting more of our verbose logs from the kubelet right now.
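On a kubeadm-provisioned node, one low-touch way to raise the verbosity is a small systemd drop-in (a sketch; the drop-in filename is arbitrary, and depending on your distro the same flag may instead belong in `/etc/default/kubelet`):

```ini
# /etc/systemd/system/kubelet.service.d/20-verbosity.conf
# -v accepts 0-10; 8 is verbose enough to watch sync-loop activity.
[Service]
Environment="KUBELET_EXTRA_ARGS=--v=8"
```

After editing: `systemctl daemon-reload && systemctl restart kubelet`, then follow along with `journalctl -f -u kubelet`.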
So, as it comes up and gets fired up here — let's take a look at that static pod manifest and see what we're actually seeing with a static pod.
So I'm going to take a look at that manifest real quick. Here I'm defining a pod, just like you would if you were going to interact with the Kubernetes API: I'm giving it labels, I'm giving it a namespace to be associated with, I'm giving it a command, I'm grabbing the etcd image, and I'm mounting in some directories from the underlying host.
I'm setting hostNetwork to true, and I'm using a hostPath to mount a directory — that may or may not exist — where I'm going to get the credentials to authenticate to the etcd server. We'll talk about what I would use this for: it's basically a debug tool for interacting with etcd that is pre-configured for the way that kubeadm configures etcd. So we're going to talk about that later — but first, the interesting part. So let's go back to our logs.
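A static pod manifest of the shape being walked through might look like this (a sketch of an etcd debug-client pod; the image tag, names, command, and the kubeadm certificate path are illustrative assumptions):

```yaml
# Dropped into /etc/kubernetes/manifests/ on a node, this file is
# picked up and run by that node's kubelet directly -- no scheduler,
# no API-server round trip.
apiVersion: v1
kind: Pod
metadata:
  name: etcd-client
  namespace: kube-system
  labels:
    app: etcd-client
spec:
  hostNetwork: true              # runs without a CNI, as discussed above
  containers:
  - name: etcd-client
    image: k8s.gcr.io/etcd:3.3.10
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: etcd-certs
      mountPath: /etc/kubernetes/pki/etcd
      readOnly: true
  volumes:
  - name: etcd-certs
    hostPath:                    # host directory; may or may not exist
      path: /etc/kubernetes/pki/etcd
```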
We can see that it's associated with a namespace, and that the pod has come up, right? So if I exit out of this node and I do kubectl get pods, I can see my etcd-client pod on kind-worker — and because this is a static pod, it's being managed by the underlying kubelet, right? And so, one of the kind of interesting things here — I'll see if I can split horizontally.
A
Control
8%
cute,
okay
control,
a
percent
victory;
okay,
so
cute
kiddo
watch.
Actually,
let's
just
jump.
Let's
jump
back
into
that
work,
a
real
quick.
What
I'm
trying
to
show
you
here
is
one
of
the
interesting
things
about
static
pods
and
then
we'll
move
forward.
So
I'm
going
to
do
docker,
exact,
GI
kind,
worker,
I'm
gonna,
do
a
watch,
CRI
cat,
all
PS.
Alright, cool. So we have our etcd-client watch going, and now: kubectl delete pod, -n kube-system — we're going to get rid of that etcd-client pod. Now, most of the people who are used to working with Kubernetes at this point are expecting that etcd-client pod to die — but it does not. Any guesses why that is?
This is because static pods are entirely owned and operated by the kubelet. The kubelet doesn't even have to ever see the API server — exactly: they are not API-server controlled; they're controlled entirely by the kubelet. Which means that they can be disconnected, or brought back up, whatever — the kubelet is managing those static manifests directly. Now, what's another interesting point is what that means for admission control, right?
Admission control, in this case, means that the kubelet has started the pod and is operating it, and then it tries to report up to the API server that a pod is running — and that reporting, that ability to actually register this running pod with the API server, is where admission control comes in. And so, as you can imagine, anything that would constrain a pod — like what a pod can do, or what the pod can request — is ignored with static pods. Static pods are created no matter what, whether you have admission control or not.
They are created regardless; what the admission control does is determine whether or not we report that pod into the cluster. Interesting point — just wanted to highlight that. Okay, so just as an example: let's do kubectl get pods -n kube-system -o wide, and grep for kind-worker. So here's our pod that is managed by the API server, and I wanted to highlight what happens when I delete that one, just so you can see a valid comparison here, right? So: delete, in kube-system.
So these are all of the logs that we have for bringing up kube-proxy here. This is the creation of it, and then we have a sync loop ADD for that object from the API server — right here, that's what that is — and then we see the token being generated, and things being mounted, or not.
Next, let's try to see the delete event — it's probably further down. There we go: "containers to kill". So there is the kill of the pod, from the request coming in from the API server saying go ahead and delete this thing — and then the kubelet goes ahead and does that: deletes that pod. And then we see it pulling down the manifest again and beginning the process of running that pod again. So, bumping the logs up, we can really understand the lifecycle of a given...
...creation of a pod. So I think that's what I wanted to share with you. This is, like, one way of understanding — really digging into the detail of — how the kubelet itself works, and you could use the same output to look at kube-proxy, or you could do it for the etcd client, and you can see how that would be useful in both situations, right? So: the kubelet's sync loop, again, determining that a pod needs to be created — it goes ahead and creates it, right?
One more thing I wanted to show you, kind of in relation to the way that static pods work: /etc/kubernetes/manifests — let's go ahead and edit this manifest, because this highlights how things are changed, right? So I'm going to change the namespace that this pod is actually in. Let's go ahead and put it in the default namespace. Yeah.
And it removed that object from the kube-system namespace, and moved that object to the default namespace. Now, before I move on from here, I just want to highlight — like, think about that for a second. I've created a static pod that's in the default namespace; what does that mean for RBAC? Interesting stuff, right? Anybody with access to the default namespace can now exec, or get logs, or attach to that pod — which is running with a hostPath on the underlying node. So static pods are an interesting surface to explore.
We have a lot to cover — and we have so much more to cover — but we're going to keep going. So: docker exec -ti kind-worker bash, and I'm at /etc/kubernetes/manifests, cat-ing this manifest — the etcd client. Let's jump back into the control plane now, because I want to actually validate that it gets to the control plane. Let's go ahead and do that: docker exec -ti kind-control-plane bash — that's in /etc/kubernetes/manifests...
...a way of actually interacting with etcd directly, pre-configured — it's all configured inside the environment — but that's what that tool does. So I'd actually be able to, like, interact with the entire system. Pretty cool stuff. So that's how that works — it's really cool. Alright, moving on.
Let's go back to our checklist and see where we are. We have talked about the theory of operation. I still want to show you some of this other stuff before we move on, and we actually talked about static pods and PLEG.
These are the things I want to talk about, though: these are the interfaces of this particular component. Before we move on, let's talk about how kubelet client and server authentication work against the kubelet itself. So, the kubelet exposes an API — in fact, there was a really interesting CVE some time ago, a kubelet API CVE...
...or just interact with the API server directly. So before we get too far down that path, I want to talk about what that actually means. So, let's go back to our shell again — okay — and do kubectl get pods -A -o wide, and let's look at our worker again: grep worker. So we can see this pod running here — this kube-proxy pod, right? Actually, let's do kubectl run nginx --image nginx --replicas 3.
Cool. So: kubectl get pods -o wide, and we can see these two are sitting on the kind-worker, and they're kind of transitioning right now. Alright, so, kind-worker here. So when I exec into this pod, what's actually happening is that I'm making use of the kubelet API to interact with the pod itself, right? So I do kubectl exec -ti...
There's our proxy right there. That's our connection into the kubelet, using the kubelet API to establish that connection, right? And so that's actually the API server authenticating to the kubelet to get this connection going. Now, one of the other interesting outputs of this is how the API server authenticates to the kubelet. That is defined by the way that the API server is configured, and it's also defined by the way that the worker is configured — because we have a server side and a client side.
A
In this case, the server side is the kubelet and the client side is the API server; not kubectl, but the API server. So the connection is: I use kubectl to authenticate and connect to the API server. The API server proxies my connection to the kubelet to allow me to make use of the kubelet's API to do an attach or an exec on that pod, but the identity is lost in the meantime, right.
A
So if I look at the source of the identity connected to this pod right now, it's gonna be the API server, not kubectl. That's what I want to highlight: this connection path is kubectl to the API server; the API server, once it's gone through the authentication and authorization process, proxies me to the kubelet, where I can interact with the container directly. Okay, cool stuff, wanted to show you that. Let's talk about how the authentication part works, though, because that is also kind of interesting, right.
A
Yeah, that's probably gonna be easier to do from outside, so I'm gonna close this window, and what I want to show you is how the kubelet itself is configured. You know, there are a couple of different ways to look at this, and we've already looked at it the one way, right: if we just exec into our worker kind node, docker exec -it kind-worker bash, and we do systemctl cat kubelet.
A
We can see where all of the configuration files necessary for this kubelet are located, and we could go through each of them and understand how they're configured. Now, these are the source of truth right now, right: the kubeadm flags and the stuff that's in /var/lib/kubelet/config.yaml, all of that defines how these things are actually configured. But there is another way that is interesting for understanding how the kubelet is configured.
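The drop-in files that systemctl cat kubelet points at are plain environment files. As a sketch, here is the shape of a kubeadm-generated kubeadm-flags.env; the exact flags vary by version, these values are illustrative, and the file is written to a temp path here so the snippet runs anywhere:

```shell
# Illustrative copy of a kubeadm-flags.env file; real nodes keep this at
# /var/lib/kubelet/kubeadm-flags.env and the kubelet systemd unit sources it.
cat > /tmp/kubeadm-flags.env <<'EOF'
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.17.0.3"
EOF

# Source it the same way the systemd unit would, then show the flags.
. /tmp/kubeadm-flags.env
echo "$KUBELET_KUBEADM_ARGS"
```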
A
proxy/configz, piped to jq. So what this is gonna do is I'm going to use kubectl get and then this --raw extension to form my own URL, which is going to be referenced against the API server. So it's gonna be /api/v1/nodes/kind-worker/proxy/configz, telling it that I want to proxy against that kubelet directly, and I want to use that configz endpoint.
A
B
A
This is a way of looking at the configuration of the kubelet directly, and this configz endpoint exists on the kubelet; it exists on the kube-proxy. I believe it's just those two right now, the kube-proxy and the kubelet; it might be on the controller manager and the scheduler, but I don't believe so, I believe it's just those two, kube-proxy and kubelet. It would allow for dynamic configuration, and this is one way of understanding how the kubelet is configured, so we can see these things directly. Now, there is a mechanism in place where we can actually dynamically
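The raw path being built there can be sketched like this; kind-worker is the node name from this demo cluster, and against a live cluster you would hand the resulting path to kubectl get --raw:

```shell
# Build the API-server path that proxies through to a node's kubelet configz
# endpoint. The node name (kind-worker) comes from the kind demo cluster.
node="kind-worker"
path="/api/v1/nodes/${node}/proxy/configz"
echo "$path"

# Against a real cluster you would then run:
#   kubectl get --raw "$path" | jq .
```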
A
reconfigure the kubelet, but most implementations don't use that today; they use the configuration on disk. So all this really highlights is our ability to understand how the kubelet is configured, not necessarily to modify it. All right, so that's the kubelet and its configuration. Before we move on, I also want to show you the metrics that can be exposed, so again, I'm going to use proxy and I'm going to look at the metrics endpoint.
A
Now this is an interesting one. These are the metrics that are exposed by the kubelet directly. The kubelet is instrumented with Prometheus, and we can see all of those things that the kubelet exposes as metrics, and this is very useful information when you're trying to understand how the kubelet is working or operating. I've used this output in firefighting mode to understand, for a specific kubelet, what the situation is when I'm trying to understand how it's actually operating, right, and so this is a great way of understanding it.
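The metrics endpoint returns Prometheus text format, which is easy to slice with ordinary tools. A small sketch using a made-up sample instead of a live kubelet; the metric names and values here are illustrative of the kind of series the kubelet exposes:

```shell
# Fake a few lines of Prometheus-format kubelet output so the filtering can
# run without a cluster; a real scrape would be:
#   kubectl get --raw /api/v1/nodes/kind-worker/proxy/metrics
cat > /tmp/kubelet-metrics.txt <<'EOF'
# HELP kubelet_running_pod_count Number of pods currently running
kubelet_running_pod_count 4
kubelet_runtime_operations_duration_seconds_count{operation_type="exec"} 12
EOF

# Pull out just the pod-count series.
grep '^kubelet_running_pod_count' /tmp/kubelet-metrics.txt
```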
A
A
Yeah, anyway, have fun with the metrics, and with the config; those are all things that are there. And then, lastly, the other one that's there is this healthz endpoint, telling us whether the kubelet is in a healthy state or not, based on its own parameters. So those are the three things that are exposed by the kubelet. There's also a debug endpoint.
A
A
But this call is going directly to the kubelet again, right, to get the resulting response, and it looks to me, from my outside view, as though the response is coming back from the API server, but that's only because we're proxying back through the API server. That's what I wanted to highlight.
A
A
A
It doesn't actually expose the kubelet's own logs, just those things that it is responsible for expressing up to the API server, right. So here's how logs and attach work, right: so instead of doing kubectl logs, I could actually look at the logs directly.
A
That is where we're getting into accessing it that way. So that's the kubelet API. Let's go back and check that box, and then we'll move on. We've talked about configs, we talked about metrics, we talked about the kubelet API, we talked about CRI, we've talked about client-server auth in relation to the way that the API server authenticates to the kubelet, but not completely.
A
A
Let's go back to here. So I'm looking again at the configuration of the kubelet, right: it's got this static pod manifest path, and it's got a certificate file and a private key file.
A
These are the certificates that are being used for the kubelet to authenticate to the API server, right. This is the certificate used by the kubelet to authenticate to the API server, and these are going to be rotated by default, right. So in kubeadm these are actually brought to you by an implementation called TLS bootstrapping. When you run kubeadm join, we actually generate a CSR for this kubelet, then we put that CSR up in the cluster.
A
The cluster will automatically approve that CSR, and we will get a signed kubelet client certificate that the kubelet will use to authenticate to the API server, and then we'll know that the API server's certificate is trusted by validating against the CA file here. So that's the path of how the kubelet authenticates to the API server: it has these keys, a certificate and a private key, so it's using mTLS to authenticate to the kube API server, and it's validating that the API server's certificate is signed by a known CA.
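The identity inside that client certificate is what later drives authorization decisions. As a sketch of what such a certificate carries, this generates a throwaway self-signed cert with the subject shape kubelet client certs use (group O=system:nodes, common name system:node:&lt;node-name&gt;) and reads the subject back; on a real node the cert lives under /var/lib/kubelet/pki and is signed by the cluster CA rather than self-signed:

```shell
# Throwaway key + cert using the subject convention of kubelet client
# certificates: group system:nodes, name system:node:<node-name>.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/kubelet-demo.key -out /tmp/kubelet-demo.crt \
  -subj "/O=system:nodes/CN=system:node:kind-worker" 2>/dev/null

# Read the identity back out, the way you would inspect a real kubelet cert.
openssl x509 -in /tmp/kubelet-demo.crt -noout -subject
```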
A
Now there's another piece of this here, right, where we talked about how authentication works. So these are the things that are going to authenticate to the kubelet: if you have a client coming in that's signed by the CA cert, we're gonna trust it, and then we're going to enable a webhook to determine whether or not that user is authorized to do things. Sorry, this webhook is about token authentication. So we have two different forms of authentication that we're highlighting here.
A
One is the client certificate coming in signed by the CA cert. So when the kube API server authenticates to the kubelet's API, it's actually going to use a client certificate that's signed by the CA cert, and that will get it authenticated. If I wanted to use a token, a service account token, to authenticate, that would be supported by the webhook.
A
And that's authentication. Now, how does the authorization work against the kubelet API? The kubelet API is only authorized by webhook, and the way that webhook works is it will introspect the authenticated user to understand who that user is and what permissions they have, right. So if we're looking at the client cert that comes in and says: I am part of the system:masters group.
A
Right, then what will happen is the kubelet will actually call out to the API server and say: is system:masters authorized to do an exec or a logs or an attach? And if it is authorized, then it will allow the connection to continue. And if what's coming in is not authorized, say my own credential that doesn't have exec access or proxy access to the kubelet API, then I will get denied.
A
There's one more important part before we move on, which I think is interesting. Now, I'm not sure it's highlighted in this configuration... no, it is not. Because we have not specified what the serving cert or serving key are going to be, by default what happens is it's a self-signed serving certificate for the kubelet API, meaning that the serving certificate is not secured by the CA cert in the cluster.
A
Now, there are ways you can configure the kubelet such that it would actually automatically generate a CSR for the serving certificate as well, but we don't have a way of actually validating who the kubelet is, so we don't feel comfortable providing the kubelet a signed serving certificate without having some third-party verification.
A
You can dig in here through all the command line tools and understand what each of these pieces does. It's all highlighted here, and it describes quite a lot of the configuration options of the kubelet. Now, I've covered a number of them, the ones that I think are the important ones just to understand runtime for the kubelet, but this is just the kubelet part. All right, 2:30 now. Oh my gosh, I took on way too much, I think, to dig into this week.
A
A
But let's move on for now; so much more to talk about. Okay, so we've covered pretty much everything in my list for the kubelet. We understand that the kubelet is going to implement things like manifests, and it has three different ways to get those manifests into the kubelet: we have a way via the watch mechanism, we have a way via the static pod mechanism, and there's also an API mechanism with which you can actually define manifests that will run on that kubelet directly.
A
What do you all think, should we keep going? Do you want to pick up another component before we stop for the day, or should we call it here at the kubelet and then pick this up again as a series? What do you all think? Let me get a vote real quick: keep plugging away for a little bit, or call it for the day and do the series thing.
A
A
B
A
A
A
Up here at the top, we have some configuration that is specific to the node, including this piece, which is an implementation detail of kubeadm that informs us about where the containerd socket is, right there. So this tells us that this particular node is actually using containerd as the container runtime, and it also informs the kubelet how to actually authenticate to it. Now.
B
A
I mean, this annotation here is actually just for our purposes as operators, so I can see how this kubelet is configured. It's good information for us to understand. The node info output does tell us what version of containerd is running, because the container runtime has actually been specified. But how do you think we tell the kubelet what container runtime to use?
A
A
So, in our case, kubeadm is actually doing it for us, right. What kubeadm does is it will actually determine that the container runtime we're running on this node is containerd or Docker or what have you, and it will configure the kubelet for us, to allow the kubelet to actually interact with that particular runtime. In our case, we have containerd already running on the underlying node as part of our prerequisites, and we've configured the containerd socket as part of the kubeadm arguments.
A
A
A
Let's take a look at, say, our kind kubeadm.conf. Okay, so kubeadm in this case isn't actually doing this automatically; it's doing this with a node registration argument, right. So kubeadm has been configured to use containerd directly. I believe kubeadm does have a way of understanding it automatically if it isn't provided, but in our case this is actually configured as part of kubeadm.
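That node-registration piece looks roughly like this in a kubeadm config file. A sketch, written to a temp path; the criSocket value is the common containerd default and the extra args are illustrative:

```shell
# Illustrative kubeadm InitConfiguration fragment showing how the CRI socket
# is pinned via nodeRegistration rather than auto-detected.
cat > /tmp/kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
EOF
grep -q criSocket /tmp/kubeadm-init.yaml && echo "criSocket configured"
```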
B
A
B
B
A
This is all part of the configuration of this particular kubelet, and it makes up the way that the kubelet is actually interacting with things. Now, containerd can actually be interacted with via crictl with a little bit of configuration, so that's actually how we can interact with the container runtime on this node and see how it's actually running. Containerd is also interesting because it gives you information about the namespace, and it also gives you information about the running pods and what their names are, and if I do crictl ps.
A
A
A
A
A
So the defaults for the kubelet itself, right: it has a default for the CNI bin directory and a default for the CNI conf directory. We're looking for CNI plugins, like flannel and those things, to be in /opt/cni/bin, and we're looking for the configuration directory to be /etc/cni/net.d. So let's go look in those two places on our host.
A
A
B
B
A
A
Canal
has
configured
for
the
use
that
that
will
be
used
by
Hewlett.
So
there's
all
this
part
of
the
cni
mechanism.
But
what
I
wanted
to
point
out
was
like
how
cool
it
would
interact
with
the
container
runtime
and
how
cool
it
understands
where
the
container
networking
interface
is
right,
and
so
that's
where
all
of
us
stuff
is
storage.
B
A
So there's also some information about where the storage driver configuration is held and where it's configured. But definitely, just reading through all of these command line arguments and understanding what they do, I think, is helpful, even if all you do is just read what the terms are, how they're configured, and what the defaults are; I think that's helpful, right. So, in our case, if a default is false, I'm not worried about reading it, but if there's a default that actually has a setting.
A
There we go, so this argument, max-pods, has a default of 110. That's a really interesting default argument to look at; that means that I can actually spin up 110 pods on this node. Oh, okay, so one of my co-workers here at VMware actually informs me that kubeadm will only configure the runtime automatically if you have dockershim installed; it won't determine that you have containerd and configure it for you. Okay. I thought that was something we were working on.
A
This max-pods argument tells us how many pods we can actually run with a kubelet by default, great, and sometimes that's actually adjusted. So, for example, if you're using the AWS CNI, this will actually be adjusted for you, because it won't run more pods than you can actually have network interface addresses for. So, interesting output, but yeah, stuff that has defaults is good to understand if you're trying to dig into what's actually happening. The other one that's interesting is the pod-infra-container-image, k8s.gcr.io/pause, the pause image.
A
A
A
When we instantiate a pod with the kubelet, typically speaking, that means that we're going to end up with, at the very least, two containers: we're gonna have a container that represents the infrastructure container, and the container that represents the one you've highlighted in your pod specification, where your code resides, the container image that we're going to download and run.
A
There is a pause container generally running, and if we go back to our manifest, we can see that the pause container used by default in this case is k8s.gcr.io/pause:3.1, and I want to highlight what that does, right. What that does is: it's a little C program that runs as its own container, and it will actually just keep that container alive, and then what we associate with that container is a set of namespaces that may or may not be shared with the other containers inside of that pod.
A
So when we spin up this infrastructure container, it is to that infrastructure container that we would mount volumes; it would be to that infrastructure container that we would attach things like the network namespace. When we define the infrastructure container, we're going to associate the network namespace that will be shared across all of the containers, and the volumes that are attached to all of the containers, with that infrastructure container, and then from that container we will.
A
A
A
A
A
A
A deployment of nginx: gonna run kubectl run nginx --image nginx --replicas 3, then kubectl get pods, oh actually, let's do that with -o wide. So we have our pod back on kind-worker, and it's running with its own IP; that's a 10.244 overlay IP, which is why I needed to see that, so that we understand that it's on its own network. So now we could do our crictl piece; when we exec in again, we can also do ps, yeah.
A
A
A
A
A
2215, right, so cat /proc/2215/cmdline; that'll be the nginx process, and that's the interesting thing, right: each of these things is associated with the same network namespace, and the one that got it first was the pause container. All right, so if I do crictl inspect on nginx.
A
A
A
A
Yeah, inspecting this container gives us, and we can see, the capabilities that it's associated with: what's bounding and effective for it, what things are inheritable by this container, things that are permitted, the OOM score. All of that stuff is expressed in a way that we can understand just by looking at the inspect output here. All right, that was pretty far into the weeds; I don't want to get too much more into it.
A
A
A
We've talked about where to find information about how it's configured and how many of these container integrations are working: the container runtime, container networking, container storage, all of those things. We talked about the kubelet API, the configuration, how to view it and how to configure it, and the metrics that are exposed by the kubelet. We talked quite a bit about theory of operation, we talked about static pods, and I think that gives us pretty good coverage of the kubelet.
A
All right, thank you very much. Tune in next time and we'll continue our exploration into the kube-proxy layer and the kube-controller-manager and the scheduler and the API server. I'm gonna see if I can bundle some of these things up, but there's a lot to cover here and I'm realizing I really feel.