A: Welcome, everybody. This is the Distribution demo for the 15th of December, 2022. That might be the 16th for some of you. Today we're going to look at something interesting: we had a customer report of ephemeral disk actually causing some pods to get evicted.
A: What's interesting, verbatim from their kubelet logs, is that a pod got evicted for using less than one meg of ephemeral disk. But that's because it only asked for zero; their cluster happens to have very strict eviction behavior configured. And we actually don't ship ephemeral-storage requests or limits out of the box as part of the chart. It's not that you can't configure them, that's totally doable; it's that we just don't have them pre-built as defaults in the chart.
A: Now, this particular customer's problem isn't the pod failing in itself. It's failing because the pod didn't ask for any resources: because nothing was requested, it got zero, so when it was using more than zero, it was first on the eviction list.
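As a quick way to see which pods asked for zero, something like this jsonpath query (default namespace assumed) lists each pod's ephemeral-storage request per container:

```
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].resources.requests.ephemeral-storage}{"\n"}{end}'
```

Pods that print no value after the tab have no request set at all.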
A: Right, so how do we hunt this down? What do we look at to figure out how much ephemeral storage is actually used? How do we find where it's coming from, and things like that? So the first question for everybody is: what is ephemeral storage? What is it, really?
A: So, let me do a quick screen share here. As always, if it starts to flicker, tell me, because, you know, screen sharing. We care specifically about the ephemeral resources. We're familiar with CPU and memory, even HugePages, but what about actual ephemeral storage?
A: Ephemeral storage is effectively whatever your container consumes as scratch space: if you're writing anything, if you're doing logs, if you're changing a config file, if you're using some scratch space somewhere. Now, what I do want to call out is that some of us are aware we make use of emptyDir with medium Memory, a.k.a. tmpfs, for a number of things, such as handing data between the init containers and the running containers. Those do not count as disk-backed ephemeral storage, because they're in memory.
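For illustration, a minimal sketch of that kind of in-memory volume (standalone pod, names hypothetical):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory        # tmpfs: counts against memory, not ephemeral-storage
EOF
```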
A: But they don't count toward that. The ephemeral disk accounting legitimately covers anything you write that doesn't go to a PVC and doesn't go to an in-memory tmpfs. So the most common offender that we're all going to know about is logs. Logs, logs, logs, right? The biggest complaint we have is that we write log files and then have to turn around and cat them back out with JSON logging, with gitlab-logger or something like that.
A
That's
an
offender
and
we've
had
customers
ask
us
to
make
arrangements
to
make
that
work
and
we
have,
but
that
doesn't
account
for
what
this
customer
had
failed.
So
the
question
is:
how
do
it
actually
use
disk?
Why
did
it
use
that
disk
and
what
caused
it
to
push
across
the
limit
that
triggered
something
right
now?
A
What
matters
here
is
we
have
the
ability
to
set
resources
limits
and
resources
requests,
including
ephemeral,
storage,
all
right
our
chart
for
every
container.
We
already
have
a
resources,
that's
empty.
When
it
comes
to
the
map.
We
don't
have
a
hard
structure
set
in
this
Behavior,
so
you
can
actually
Supply
this
additional
configuration
for
any
of
our
charts.
You
just
have
to
know
that
you
can
do
it.
We
don't
have
it
in
our
documentation,
but
because
we
take
the
resources
map
and
directly
to
yaml
that
it'll
work.
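As a sketch of what that could look like (the exact values path depends on the chart; `gitlab.webservice` and the release/repo names here are assumptions):

```
# hypothetical values file; the chart renders the resources map verbatim
cat > ephemeral-values.yaml <<'EOF'
gitlab:
  webservice:
    resources:
      requests:
        ephemeral-storage: 1Gi
      limits:
        ephemeral-storage: 2Gi
EOF
helm upgrade gitlab gitlab/gitlab -f ephemeral-values.yaml
```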
A: So let's have a look at what this actually looks like, in terms of where the storage gets used and how we can manage to hunt that down. I'm going to see if I have an existing deployment right now. That cluster does indeed have a deployment of 6.6.2, which is good. All right, so I'm going to open myself another terminal and run K9s.
A: Interesting. Okay, the webservice is making logs. So: kubectl exec -ti into the pod, with -c webservice, and we're going to use du. Now, let's say du -hd1 on /var/log/gitlab.
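A cleaned-up version of those commands, with a hypothetical pod name (the du target is my reading of the demo):

```
# exec into the webservice container and summarize the GitLab log directory
kubectl exec -ti gitlab-webservice-default-<hash> -c webservice -- \
  du -hd1 /var/log/gitlab
```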
A: And effectively what you end up having is a stack of layers together, and the last one is the difference between the running system and all of the layers below it that are merged together. The easy way to deal with this is to collect a few pieces of information. One: you need to know the node that that particular container is running on, because if you don't know the node, it's really hard to find where its filesystem is. So I'm going to grab the pod's node with kubectl get pod. Oh yeah.
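One way to get the node (pod name hypothetical):

```
# -o wide adds the NODE column to the output
kubectl get pod gitlab-webservice-default-<hash> -o wide
```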
A: us-central1-b. I need to learn how to type that; is that dash 1-b? I know, I just read it, but you know how it goes: one, dash, b. I'm going to pop into this node. Now, this is running Google's Container-Optimized OS, so really what you care about in this particular case is literally what they call toolbox.
A: It's in their bin path, and what it will do is pull down and mount in a whole bunch of stuff so you can do deep debugging, install packages, things like that. So I can actually do things like strace and hunt around the filesystem. Now, in the container back over here, I'm going to go to my home directory, and I'm going to make myself a file that I can easily find.
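Roughly, the steps look like this (node name hypothetical; toolbox is Container-Optimized OS's debug shell, and `bogus` is the marker file created in the demo):

```
# from a workstation: SSH to the node, then enter the toolbox
gcloud compute ssh <node-name> --zone us-central1-b
toolbox    # pulls a debug image; lets you install packages, run strace, etc.

# meanwhile, inside the webservice container: an easy-to-find marker file
cd ~ && touch bogus
```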
A: Don't worry about fixing that later. So, if I do a find here, there's a bunch of stuff; let's see if we can make that easier to read. I'm going to do du -hd2.
A: All right. In /etc we have pki, which is 16K, and gitlab, which is not a lot. If we look up here, we can see that's our CA certificates, and this is our /etc/gitlab, if we touched anything in there. Then we have our home: the git config, the S3 config, the bash history. The bash history is here because we've run commands; the S3 config is here because we generate a template and it places the file here; and bogus is the file that we made.
A: Well, that's kind of the nature of what Bootsnap is. It also tells us why, you know, it took 39 seconds plus the time it took to get to the Rails console, and where the delay came from between "it's booted" and "here's your shell". Now, the question is: what is the difference between...
A: I'm, however, going to take that slash and just expand it a little bit, because we know that the path we care about is buried somewhere under the overlay2 driver; that should take less time to find, basically. Okay, now we know the directory we want to get into, so now I can cd to that diff directory.
A: This is still the simplest way I know to tell you to find these things. Now, I don't know exactly where this actually is, so I'm going to do a find with -type f and -name on it, and I'm going to run it on the root directory just to find out where it is. Okay, so now, instead of being in an overlay2 diff, we're actually in volumes.
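From inside toolbox, that hunt looks like this (on Container-Optimized OS the host's root filesystem is mounted at /media/root):

```
# hunt for the marker file across the host filesystem
find /media/root -type f -name bogus 2>/dev/null
# a hit under .../overlay2/<id>/diff is writable-layer usage;
# a hit under .../kubelet/pods/<uid>/volumes is a defined volume
```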
A: And then all the rest of that is procs. So, in terms of what we expect our container to write, I don't really see anything else consuming a big chunk, and I don't know exactly where it's coming from yet. Now, for the sake of time, I'm not going to spend too much more time diving into that particular behavior. The concept I wanted to convey was: how can you hunt this down? Where is your consumption coming from?
A
How
can
you
find
it
in
its
real
usage
right
if
you're,
using
a
volume
or
you're
writing
accounts
to
a
volume,
whether
it's
ephemeral
or
a
PVC?
It
won't
show
up
in
that,
if
layer
from
the
overlay
driver,
if
you're
using
it
from
a
mounted
position
like
we
do
with
logs
it's
ephemeral,
but
it
shows
up
in
a
different
location,
because
it's
technically
a
volume
that
we
have
defined
so
the
technique
of
literally
write
some
file
go
on,
for
it
is
the
important
bit.
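In other words, ephemeral consumption can surface in two distinct places on the node; a sketch of both (layer ID and pod UID hypothetical, Docker-style paths assumed):

```
# writable-layer writes: the overlay diff/upper directory
du -hd1 /var/lib/docker/overlay2/<layer-id>/diff
# volume-backed ephemeral writes, e.g. an emptyDir holding logs
du -hd1 /var/lib/kubelet/pods/<pod-uid>/volumes
```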
A
What
can
we
do
to
reduce
the
likelihood
that
it
does
it
right
that
one
we're
going
to
need
help
from
Engineers
throughout
the
company
to
figure
out
what's
going
on,
and
why
and
how
and
what
we
can
do
to
minimize
the
impact
when
it
comes
to
the
outputs
of
the
templates
on
the
file
system.
We
know
this.
We
expect
right,
there's
not
a
lot.
We
can
actually
do
right.
A
We
could
in
theory,
Mount
some
directories
and
Define
volumes
and,
if
you
know,
we'd
still
be
consuming
ephemeral
but
mounting
things
to
slash
Etc
and
basically
kissing
at
slides
our
goodbye.
It's
not
a
great
idea,
problematic
to
say
the
least,
and
there
are
proponents
out
there
in
the
in
the
wider
system.
That
will
say
well
just
never
write
anything
into
the
file
system.
A
Sorry
I
can
do
some
real,
real
expensive
engineering
to
make
it
look
like
I,
never
read
it
to
the
file
system,
but
I'm
going
to
end
up
writing
into
the
because
that's
where
the
application
loads,
the
data
from
and
I
don't
have
control
over
that,
and
it
would
take
a
significant
amount
of
engineering
to
rewrite
sidekick
to
read
its
configuration
from
a
different
location
same
with
rails.
So
can't
get
around
that
one
sure
a
couple
of
applications
we
have
could
theoretically
pull
almost
everything
from
the
environment.
A
What
exposing
everything
through
the
environment
is
not
necessarily
what
you
want
to
do
from
a
proper
behavior
I'm
aware
of
the
12
Factor
I'm,
also
conscious
of
the
fact
that
environment
variables
can
be
scoped
in
many
ways
and
having
to
get
into
the
file
system
of
the
running
container.
That
you're
degrouped
away
from
is
not
the
same
thing
as
having
to
just
hey:
can
I
get
the
environment
for
that
process,
depending
on
which
tools
and
run
times
and
appropriate
security
controls
and
other
things
that
are
in
play?
A
We
know
that
there's
going
to
be
template,
outputs
and
we
know
there's
going
to
be
logs
and
some
of
the
things
are
far
larger
offenders
than
others
when
it
comes
to
logs
or
generating
more
logs
than
needed
as
well
right.
A
Now
the
good
news
is,
it's
really
simple:
to
observe,
stand
it
up,
check
everything
after
a
couple
of
minutes
and
then
go
find
what's
generating
logs
or
other
disk
access
and
where
and
why
now
the
good
news
is.
We
know
where
logs
right
there,
that's
simple,
but
now
we
have
to
identify
which
pods
are
using
ephemeral
storage
just
because
they
booted
a
process
right
I.
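One way to observe that per pod is the kubelet's stats summary endpoint, which reports ephemeral-storage usage for every pod on a node; a sketch (node name hypothetical, jq for readability):

```
# per-pod ephemeral-storage usage as accounted by the kubelet
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" \
  | jq '.pods[] | {pod: .podRef.name, bytes: .["ephemeral-storage"].usedBytes}'
```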
A: Now, for the sake of doing that work, we may be able to most easily detect this by actually using Docker Compose on our individual machines: running the same basic commands, popping into the container, writing something to its diff layer, going and finding it, and then going into that diff layer and seeing where the files are being written.
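With plain Docker (Compose-started containers included), the writable-layer changes can be listed directly; a sketch with a hypothetical container name:

```
# list files added (A), changed (C), or deleted (D) in the writable layer
docker diff webservice
# or locate the layer on disk (overlay2 driver)
docker inspect --format '{{ .GraphDriver.Data.UpperDir }}' webservice
```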
A: The logs will show that the pod was terminated because of ephemeral disk storage, but now you're going to have to figure out how the correlation ID leads back to the origin of the event of that pod being killed due to ephemeral disk. That'd be a fun one to hunt down. I can conceptually trace it, but good luck to somebody who doesn't know what to look for, because of the environment and platform.
C: This is super helpful, Jason; I didn't know that trick for jumping around the filesystem. I was just going to say, I think your snippet to check the ephemeral storage for a pod considers the whole pod, and we were only popping into the webservice container. I just did a test where I jumped into Workhorse, and /srv/gitlab was huge there too. So that's probably it.
A: So I pulled the entire pod, and I chose something that I knew had more than one container in it, because, exactly: there are two containers, which means there are two filesystems that may have changes written to them, whether they're logs or contents. Now, Workhorse doesn't use Bootsnap, but it does write things, and that's exactly the right call, thank you: ephemeral-storage is an item in the API that tracks the storage consumption of everything in that pod.
A
So
you
may
have
to
look
and
see.
Is
there
more
than
one
container
here?
Did
a
sidecar
get
added
to
the
Pod?
Did
an
extra
container
get
added
to
the
Pod?
Are
they
consuming
things
right?
We
can
set
the
resource
requests
on
our
containers,
but
if
somebody
say
uses
istio
and
istio
for
some
reason
starts
consuming
ephemeral,
storage,
that's
outside
of
our
control,
but
the
best
we
can
do
is
provide
hey,
go
look
for
this.
B: That brings up a question for me: when other containers complete, will the node garbage-collect the diffs when the containers exit? So, the other containers in the pod.
A: Okay, well, if anybody else has any other questions, raise them now, or I'm going to say that's some crickets.