Description
A show that features the people and technology that make Red Hat® Enterprise Linux® into the world’s leading enterprise Linux platform.
A: Good morning, good afternoon, good evening, wherever you're hailing from — welcome to another edition of Red Hat Enterprise Linux Presents. I am here with the one and only Scott McBrien. We were just talking about what we're going to do on the show today, Scott, and it seems like a lot of fun to me — but I am an old-school Linux sysadmin, so...
B: Well — and I mean, honestly, we talk a lot about things like performance and tuning, and there have been a lot of technologies that have come into our lexicon for doing those things. Yeah — and, like, three months ago, I think, we had Karl Abbott, who's the experienced product manager for operating system performance for RHEL, and we talked about Performance Co-Pilot and Grafana and visualizations, and that's all great. In fact, the next version of RHEL will have even more cool stuff like that. But...
B: You know, when I start looking at what's out there, that's not what people find, right? So, like, visualizations and those types of things are really good for stuff like troubleshooting, right? But let's say that you are a database server: there's stuff that you should be doing to adjust the system parameters to optimize it for that database workload — and visualizations and data collection and those types of things...
B: ...don't help you with that. And so I thought we'd start today by just kind of doing a quick tour of the contents of /proc. Yeah — and then, when we were talking before the show, I found a couple of tuning guides specifically for database things: one that I thought was really good, from Microsoft, for SQL Server running on Linux, and another one that is, like, less good — and we could talk about why I think it's less good for just kind of a general database server. But I —
B: Right out of the box, right? Like — yeah, well, like, when you turn those knobs — and this is the thing that people don't understand — people read the article and go, "Okay, I'll put these in — bam, there we go, cool, optimized." You're actually making choices that make the system less good for other things. That's —
B: Well — and also, a lot of the tunables will do things like shove as much as they can into memory. Like extending the file system synchronization values, right, so that your buffered writes stay buffered longer, so that if you need to refer to that data, it's already in RAM.
Right
or
someplace
that,
where
like
there
may
be
more
likely
to
be
power
outages
if
you
lose
power
on
that
box
or
it's
being
run
by
less
trained
staff
right,
so
their
solution
is
pull
the
plug
plug
it
back
in
all
of
a
sudden,
all
those
disk
writes
that
you
needed
to
write
out
to
disk
yeah
and
then
what
happens
like
was
that
data
really
critical,
in
which
case
it's
gone
and
now
you're
losing
critical
data.
B
B
B: So maybe I use some of these things, but not all of these things — and that's something that I think there's still a lot of need for in our industry anyway. So let me just pull up my SSH session here. We can —
B: Yeah — at one point I was a Red Hat Certified Architect, but that has also kind of whittled down over the years.
B: Right, right — all right. So I'm just SSHed into a box, and we're currently in /proc, and this is what it looks like. Yeah — and you'll recall that proc is really a virtual file system — yep — so it's not stored on disk; we're actually looking at space within the kernel's memory. And the first thing I noticed about this is all those numeric directories. What —
B: Yeah, exactly — and so let's just look at one of them here.
B: That is the — oh, this guy right here is broken, and that's what it's complaining about. That normally is a symbolic link to the executable that was used to create process ID 610, right — but process ID 610 is apparently a kernel thread, yeah, and that's why it doesn't have an executable: because it's not executed from the file system; it's part of the kernel. So let's look at a different one. How about —
B: This is a tool for the GNOME desktop running on this box, right? Okay, cool. Okay — so we've seen a lot of the information that's in this directory elsewhere, like the executable name, like the —
B: Maybe cmdline, if we look at the command, right.
B: And I want to list that — or cat that. So this is the actual thing that was executed on the command line to make it. The auxv —
B: Oh, but it's stored in memory, and so — yeah. But some other interesting things, because a lot of this data we end up seeing in other places, like ps, all right? So —
B: So — do you know what the oom_score is used —
A: By? That's the OOM killer, I would assume, right — the out-of-memory management tool for the system, whatever you want to call —
B: — it, yeah. So when your system is out of memory, the out-of-memory killer fires up and starts whacking things. And it used to be a —
B: Right — and lo and behold, that was not the best way to make decisions. So shocking, right? So they came up with the scoring method, and so now the OOM killer kills the thing with the highest score — there you go — and it'll, like, work its way up until it has enough memory to continue. But so, on this box, on this process here, 6131 — boom, score is zero, right? And if we look across all the processes, there's probably a whole bunch with score zero, so they're all equally likely to be killed. But —
B: But you can use oom_score_adj to change the process's OOM score, and thereby make it less likely to get killed by the OOM killer. And so, in a production environment —
B: — I would check to see what the process ID for sshd was, and then adjust its score so it's protected, so that if the box had problems, it would kill your database, or kill your, like, web application servers or whatever — but SSH would still be running. Which means that, as a remote admin, you didn't have to drive to the data center to do the needful on that box. Like, you could still sit at home at four in the morning, connect to it, and fix it.
B: Yes — namespaces, yes. And no, I wasn't planning on talking about them. But the fd directory is basically, like, the files that this process has open, right? So you would get that data from things like lsof, to tell —
B: Right. The higher the score, the more likely it will be killed in an out-of-memory situation on the box; the lower the score, the less likely it is to be killed. And I know that the inclination would be, like, "Oh, this is a database box — I should make the database super resilient to killing." No, no — that's not what you want.
B: Modules, right. So these are all of the kernel modules that are loaded into the running kernel, right — and we would normally access this data with something like lsmod, right, to show it to you. And so you're looking at kind of the same data, just formatted more nicely. So, for example, this guy right here — the intel_gtt module — is loaded, and this is how much memory it's using.
B: In lsmod, it actually doesn't show us that.
B: And we can also look at stuff like — actually, I use this one all the time — partitions. So these are all of the file system... sorry, not file system: all of the device IDs of disk devices. So we can see that on this machine I've got an NVMe drive, and that has two physical partitions on it, and then I've got three device-mapper devices for my logical volume configuration. And so when I, like, plug a USB thumb drive into it and I want to know what device that is — well, I'll —
B: — just take another look at /proc/partitions, and it'll say, like, sda or sdb or whatever it is, and then I know what the device is that I just plugged in. I can reformat it or whatever and not have to worry about destroying my entire box. And we would get this maybe from something like —
B: Like that, maybe, right? And so here it's showing us — that's our NVMe drive, and then here are the two physical partitions, and then these are the three logical-volume-managed file systems. And we get more data, and it's organized a little bit more humanly, right? But if you just want something quick —
B: That's right — less data, but it's live-updated whenever something changes in the kernel, right? All right — so again, we use a lot of this data through other command interfaces, but then there's, like, the really deep stuff. For example —
B: We're looking — oops — over here, we're looking at the actual interrupt that goes with this device, and you'll see that there are four columns for CPU, because this is a quad-core box. And the numbers underneath each of those columns are how many times that CPU has handled an interrupt for this device. And a lot of times, a CPU gets assigned management of an interrupt — so anything that gets sent to this device, or any interrupt that occurs on this device —
B: — a specific CPU handles. And you can see that that's the case here with interrupt nine, right? The other CPUs aren't handling interrupt nine. But in other cases, like this one, it gets kind of spread across the CPUs, so the handling for this interrupt is more equally distributed — and this one happens to be for the Ethernet card.
B: And so again, this is, like, troubleshooting-type data, where you're interested to see if something is going crazy — or, you know, is there one of these that has, like, a really high number associated with it? So you can say: my system performance is degraded, but look, I'm getting a lot of interrupts on my Ethernet device, or my graphics card, or my Wi-Fi interface — and so you can kind of get a little bit of data on what's happening there.
B: So organizationally — or, like, in terms of what they are — they're not different. They're both in-memory file systems presented to you by the kernel.
B: The difference between them is what information goes there. And so, if we look at /sys — sys is organized more around the devices attached to the kernel, or to the machine, rather, right? And so this is where you'd go to, like, set the scheduling algorithm on a specific block device. So I would say that sys is more for interacting with system-connected devices, whereas proc is for system information. And then, in just a bit, we'll start talking about the tunables — but those tunables apply system-wide.
B: meminfo is, like, all this memory information, and we usually interact with it through things like vmstat or free or maybe top — and they'll actually show you some of this information. But there's a lot more in here than what those tools are showing you. So, for example — here, huge page allotments: these are typically not shown by a lot of the other memory-reporting applications. And if you're not familiar, a huge page is a hunk of memory that is larger in size than the normal page size on the system.
A: Oh yeah, that's cool. Yeah — so what's CommitLimit, just for everybody out there?
B: Okay, so we often operate in memory-overcommitment mode on Linux, and the reason that we do that is because, when we start up a process, it often will share a lot of its overhead with other processes. So, for example, if you're running a web server and you've got 30 Apache threads or nginx instances running, each one of those loads shared memory — like shared libraries and that kind of thing. Well, how many of those do you really need loaded? Do you need one instance of that library for every single thread that you have going?
B: No — you have one that everybody kind of refers to, but the process itself doesn't know that it's sharing that library with other processes. And so, because we're sharing memory and we don't report that at the process level, we have to have another way of tracking how much memory we've committed to delivering to processes and how much we're actually delivering to the processes.
B: So the CommitLimit is how much I will allow processes to ask for across the entirety of the system. So this box, I think, has eight gig of RAM, and you can see that our commit limit is, like, almost 12 gig of RAM, okay? So normally you'll see us commit to about 50% more memory than we've got. Committed_AS is how much we're currently committed to delivering.
B: So all the processes on the machine are currently using up eight gig of memory, and that's their reported size, right? So that includes things like shared libraries that they might be sharing with other processes — but they think they've got eight gig of memory consumed.
B: So status gives us, like, more specific memory utilization, right? Down here — like VmData, VmStk, VmExe — that's telling me what this process is consuming in different types of memory. What I was looking for was — maybe it's one of the map files.
B: There you go — okay. So this is actually telling me the hex memory address and what is stored there. And every process —
B: — it turns out, is given the same memory table. So it thinks that its memory starts at, like, all zeros and goes until some other hex number. But if you looked at every single process, every single process starts at that all-zero memory address — and they can't all start there, because that would mean that they're all stored in the same actual physical RAM. And so, what you may have noticed if you run strace and look at the system calls that are being made by applications —
A: Cool. So there is a question from Mr. Rapscallion Reeves: according to the Gentoo Handbook — so take that with a grain of salt — special care needs to be taken when mounting the proc and sys directories. They suggest using the --rbind and --make-rslave options. Could you speak on that — maybe why they're needed? I don't know, to be honest —
B: — with you, yeah. I have not seen those options. I know that sys and proc are automatically mounted into the RHEL file system, right? So I don't — let's see here.
B: So unless they are implied options — meaning they're used all the time — I don't see them being called out in how we're mounting them out here. So I don't know; without having the article, I couldn't tell you more about why that is suggested by Gentoo.
A: Sorry — that was for when mounting the base proc and sys directories into a chroot. Maybe it's just because those directories are needed for both the host and the chroot. Probably — I mean, this is all done automatically, right? Like, this is the value-add of RHEL: you're not building it, you know, basically from scratch.
B: — here. And you're right: it may be that, because it's a chroot, it is needed by both the running operating system and also your chrooted environment.
B: So I wanted to go, like, super deep for just a second — I know that the MMU stuff was also deep. All right: so there's so much data here, and a lot of it, like, we get through other tools. But then there's a bunch of just, like, random "how the operating system works" stuff, and buddyinfo is an example of one of those. So buddyinfo is broken up into the different zones of memory.
B: So, like — and remember, if I remember correctly, over here these are the pages that are individual, where there's no adjacent free page of memory. I may have that backwards, because —
B: — in this table, the next column over is, like, a group of two pages next to each other, and then the next column over would be a group of four pages next to each other. And so each one of these increases by a power of two, and it shows you the contiguous blocks of free memory that are out there available — and I actually do think I have it backwards.
B: So: no single individual pages, but lots of, like, hunks of memory. Whereas down here in DMA32 there's a whole bunch of individual pages, but very few giant hunks of buddies together — and the same thing in zone Normal. And so, like — do you need to know this information? Probably not. But if you're running on a system and it's been up for a really long time, you could use something like this to ask yourself the question: am I dealing with fragmented memory, right?
B: So if you have, like, groups of contiguous free buddies, then you have groups of contiguous memory — you're not fragmented. But if you have, like, onesies in each one of these categories, then that tells you that you have a very limited amount of memory in each of these groups of contiguous blocks of memory — or if you had, like, all single-page reported buddies, right, where they're just one page standing there by themselves.
B: True enough. All right — so that's, like, a lot on proc, probably more than we wanted to go into proc. But where I wanted to go with this is: we're used to seeing this information, but there's more stuff there.
B: If you're really interested in down-and-dirty information, you can get even more, okay? But the other thing that we often do when interacting with proc is make changes to it — and at this level, there are very few files that can be changed, right? You can't —
B: Right — but there are a couple of places where you can make changes. We saw one of those earlier, when we were messing around with oom_score_adj, and that was changing the score, right? So that was an example. Those changes are not persistent, because everything in this directory is in memory — so if you reboot the machine, it goes back to whatever the value was originally.
B: And so one of the places where we do often see changes is /proc/sys.
B: That — sysctl — is the older method of making these changes persistent; the newer method is tuned. So tuned profiles will actually adjust data here in /proc/sys, and it can also adjust things in /sys. So Sony was asking earlier about the difference between the two file systems: tuned can actually handle both; sysctl only does /proc/sys, right? All right.
A: I'm going to — but there's a question that I need to ask.
B: Sure — why don't we make that one of our first things that we mess around with in /proc/sys? So let's hold this, like, database thing for a second — okay — and talk about the mechanics of making changes to /proc/sys. All right: so, if I remember correctly, drop_caches is in vm, because it's a virtual memory thing.
B: But how do you know what should go in here? Because, for example, I know that swappiness here is a value between 0 and 100, and it sets the kernel's affinity for using swap space — whereas overcommit_memory is 0, 1, or 2: "I will not overcommit memory."
B
And
then
it's
broken
down
by
that
top
level
directory
hierarchy
underneath
proxis,
so
for
looking
at
drop
caches,
which
is
in
proxies
vm.
I'm
going
to
take
a
look
at
em.txt
and
I'm
going
to
look
for
drop
connections
all
right.
So
that's
in
the
list
of
tunables
covered
by
this
document
and
then,
when
I
look
down
into
it,
it
says
this
is
going
to
cause
the
kernel
to
drop
caches,
but
depending
on
what
number
you
shove
into
it,
tells
it
what
caches
you're
interested
in
dropping.
B: This — this is going to drop all of the cached memory that is currently in use by the kernel. Now let me take a look here — free — so you see this buffer/cache. So if I do an echo 3 into drop_caches —
B: Right. So the reason that we cache data — in fact, you'll notice that when you look at free memory using the free command, the Linux kernel almost always, like, gobbles up whatever it can in cached memory. So, like, you'll see the used, and you'll see a very small free, and you'll see a whole bunch of stuff over in cache. And you saw that originally, right? I have about eight gigs of total memory; I was using about two.
B: Right — so if you're using memory and you need to allocate more than you have in free, the kernel will automatically deallocate some of the cached memory and then allocate it to the process that's requesting it.
B: I may not have noticed that it sped up, but it actually was sped up, because it was pulling that information out of cache instead of actually having to go spin the disk — or, in my case, look in the memory register on the NVMe — to figure out what entries are in that directory. And then here — we were over here in this, and I looked in the vm.txt and it's like, oh wait, the tunable wasn't in there. Well, let me check here.
B: It is now hitting the cache instead of pulling that up from disk — and so that's the kind of stuff that we're storing in cache. And the same thing for things like application stuff: when applications open files, or applications write files, what's the likelihood that it's going to access that same file again? So we'll keep it in cache to try and speed those accesses up.
B: So when you drop caches, you're ditching all of those saved bits of data, and then the next time we need to open that file, we actually have to spin the disk up, navigate the directory structure, find it, make sure permissions are correct, allocate it, and show it to you. So that's — that's what you're doing by dropping the cache: you're just making us do all of those activities on the native operating system, instead of being able to reuse the data that we already loaded into memory the last time we used it.
A: So here's a question from the chat — and there are multiple ways to say this, m-alloc or malloc: does the OS look into free, or does it include cache too? Wouldn't that prevent apps from initializing, because malloc doesn't have enough free memory?
B: If there's not enough free memory to service it in free, then it looks to see whether, if we combine some stuff from free and cache, that would be enough memory — and if so, then the kernel embarks on the journey of, like, removing data from cache, flushing data out of cache, returning that memory to free, and then performing the malloc request that was made, right? So —
B
Well,
not
just
that,
like
they
ask
for
things
like
shared
memory
or
shared
libraries,
which
we've
already
got
loaded,
but
they
account
for
that
in
their
malik
requests
and
so
there's
a
whole
bunch
of
things
on
like
how
we
over
commit
on
ram
all
the
time.
B
B: In fact, that's one of the tunables — in the virtual memory tunables, overcommit. So overcommit_memory is the, like, zero-one-two, and overcommit_ratio is: if you're in overcommit mode, how much will you overcommit? Right — so let's just jump down to those.
B: All right. So when it's zero, the kernel attempts to estimate the amount of free memory left when user space allocates more memory. When it's one, the kernel pretends there's always enough memory — it'll never run out, so whatever is requested, that's what it's going to give as address space, right? Realize the address space doesn't actually equate to real RAM used by the process. And then two is never overcommit. So currently on this system, by default, we use setting zero: we guess, and we'll overcommit up to a certain amount, right?
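Both knobs are plain files under /proc/sys/vm:

```shell
# 0 = heuristic overcommit (the default), 1 = always grant,
# 2 = strict accounting against CommitLimit.
cat /proc/sys/vm/overcommit_memory

# In strict mode, CommitLimit = swap + this percentage of RAM.
cat /proc/sys/vm/overcommit_ratio     # 50 by default, matching the
                                      # "about 50% more" seen in meminfo
```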
B: There are certain very conservative entities that don't want to get in a situation where they need the memory and, because they're out of memory, the OOM killer starts up and starts killing off their processes — because if you're overcommitting memory, that's what could potentially be the case. And so there are certain uses where the architects of those applications have said, "We're never going to overcommit memory, because I don't want to risk having the OOM killer start up to kill off processes on my box," right?
B: Not there — it's swappiness, because I think a lot of people don't know what swappiness actually sets.
B: Yes — yeah, yes. And in fact, I think it's one of the most commonly tuned things, yeah. All right: so it's a value between 0 and 100, which sets the kernel's affinity for utilizing swap space. So it will always utilize swap space, right? It's just: how aggressive should it be at filling up that swap space? So at 100, it should try to swap whenever it can, whatever data it can, right — and at zero, it should try to never swap, as much as possible, but it will still swap. And there's actually been —
B: It was either RHEL 6 or RHEL 7 — there was a change to swap where zero does not disable it, and that's a misunderstanding a lot of people have.
B: The kernel actually got a commit — I can't remember if it was the RHEL 6 or RHEL 7 time frame — that specifically addresses the value of zero for swappiness, where it says: until this amount of memory is left (it's actually, like, a number in bytes) —
B: — the kernel won't swap. However, that number is very small, and so it was put in there to effectively disable swap space. But that's not what this parameter does: this parameter sets your affinity for using it. So really, it's like setting a rule that says: until you've gotten down to this, like, really tiny amount of free RAM that's left, don't swap — but once you hit this really tiny amount, then swap away, right?
B: The other thing is, like: "Oh, set it to zero — the kernel will almost never swap." Okay, that's pretty true. However, now, because of that change, it doesn't swap until this very tiny sliver of RAM is left — and there are now cases where, if memory usage is going up a lot, what will happen is the kernel will realize that it needs to start swapping, because it's crossed that threshold.
B: However, all the memory is now used, which means there's no memory left to actually do the swapping, right? So it works great in places where, like, memory usage is consistent, or smaller allocations of memory are what's happening on the machine. And the places where it's, like, disaster world are Java applications, or sometimes enterprise database applications — because when they allocate stuff, they allocate huge amounts of memory at a time. And so all of a sudden you cross over that threshold, where you need to be like —
B: "Oh, I need to swap" — but you've crossed over it because you just allocated the last bit of RAM that you had on the system, so you're done, and the system will essentially, like, hang. I think those use cases are extraordinarily rare, but what you'll see is, when someone recommends that you tune swappiness and they want you to be really conservative with swappiness —
B: — they will now have you set it to one instead of having you set it to zero. Because at swappiness one, you're pretty darn close to equally unlikely to utilize swap space, but there's not this, like, artificial boundary which you have to cross to utilize swap space. So that boundary limitation is removed: you're unlikely to use it unless you're very memory-constrained, but there's no actual listed amount of memory that you have to have in order to start using swap.
A: All right, there's a ton of questions here — okay, there's two of them, all right, from the same person, I should say, sorry. How does overcommit affect swappiness? If we set swappiness to a high value and have set the overcommit, would it fail to swap — because, theoretically, all the memory could be committed?
B: Okay. So overcommitment of memory means that processes ask for more than they're going to use, right? Right — so we're not actually using that memory; we just told the process that, if it wanted it — because it asked for it — it had it. And so I'll explain overcommitment like this: when you're dealing with a child, it's like, "Daddy, I want to go to Disney World."
B: "One day — one day we'll go to Disney World," right? Did you actually commit to a date on which you are attending Disney World? No — but you've told them that one day that'll happen. And so, at some point in the future, they're going to be like, "Are we going to Disney World? Now? We're doing this now, right?" And then you can decide whether you're going to do it or not going to do it — and overcommitting memory is the same, right?
B: "37 million gigs of RAM? There you go, process," right? And really, the process then uses, like, the first 400K, right — and that's what you're actually committed as, yeah. And at some point in the future, when that process goes, "I need my other 400 gig of RAM" — blam, 400 million gig of RAM — and it actually tries to store stuff there, that's when you actually have to deal with that overcommitment, right? So — okay. So I don't think there's a relationship between overcommitment and swappiness, because swap is actual memory —
B: — that's been stored and is now being paged out to the swap space. And there's actually a very specific list of things that will be eligible for swap. So things like anonymous process data — yes, that is eligible for swap — but things like shared libraries are not allowed to swap, because other things might be using them too.
B
So
let's
take
a
look
at
this
microsoft
guide
here.
Let
me
grab.
C
A
A
A
B: So I think that Microsoft did a pretty good job on this article about how to configure Linux. It starts talking about things like software — which you may or may not be using — partitioning recommendations, file system tuning. But the stuff that we're interested in, talking about proc today, is down a little bit further, where it starts talking about: okay, in your tuned profile, this is what you want to have in there — and specifically, all this stuff is in /proc/sys.
B: Yeah — granted. So what they want to do is avoid swapping as much as possible; that's why they're setting the affinity very low for utilizing it. Because a lot of the data that is stored by the SQL processes is cached data, which is stored in anonymous process data, which normally would be eligible for swapping — and so, if you swapped that data out, now the database process is trying to hit its cache, thinking it's going to be fast —
B: — SAP HANA, where that's absolutely true: the whole database is in memory for SAP HANA, yeah. For others, like SQL Server, that's not always the case, but they prefer to have that be the case, yeah. And then there's sometimes where that's just not possible, right? You have this enormous database on disk, but you want the parts that you use a lot to be in memory.
B: All right — so the other parameters they're changing here, the vm.dirty_* ones: they're making changes to how we flush disk writes, yeah.
B: That's mostly it — how we flush disk writes. Because disk writes are stored as dirty pages in the page cache, and they have a time limit on how long they can exist in the page cache before they have to be synced out to the disk. And so here they're saying the ratio of dirty pages needs to be 80, right — which is, like, a lot of cached disk writes, yeah. And then we will expire those dirty pages after 500 centiseconds, so they can actually persist in memory for five seconds before they're eligible to be written — which in computer time is, like, a really, really long time, yeah. And then dirty writeback —
B: I can't remember what writeback is, but if we went into the vm.txt file for the kernel docs, I bet it's in there and we can read what it is. So again, they're, like, trying to keep data in memory.
B: Oh — well, there's two.
B: Right — and so there are a lot, a lot, a lot of tuning guides that do things like this, and you want to be really wary of those. Because clearly they're not saying, "If you have this much memory, you need to set these values to this; if you have this much more memory, you need to be setting it to something different." So they're not accounting for the actual memory you have on your systems.
B: Maybe — maybe. Can you do it better than me? That's — this is the other important thing: so, my team is hiring at Red Hat, and we're hiring.
A: Yeah — oh, speaking of Summit, yeah: please, you know, Summit is going to be an awesome, like, three-part event this year, and we're hoping the third part is somewhat physical. So check out the Summit page — or I'll just drop a link in chat if you're interested; sign up to attend. You know, we'd greatly appreciate it if you did. But yeah, Summit is our big annual event, and we're kind of splitting it up for 2021 to make it a little bit more consumable.
A: We don't want to do the full, like, week-long experience virtually, because we know that people are fatigued from everything being virtual, right? Like, "Oh, I'm meeting with my family on Zoom later — great," that kind of stuff.
B: Yeah — I actually have a D&D group that I play with, and I had to be like, "Guys, this is too much like work now." Like, I just can't right now. I'm in when we go back to in-person, but I —
A: — just can't right now, yeah. It's — I've gotten to the point where it's like, yeah, it's cool to do, like, a group call every once in a while, but I'm really not looking forward to sitting in this chair 24/7, right? That's not the goal of me working, right? Like, I want to use Zoom for work and very little else.
B: Indeed — thanks, everybody, and see you in a couple weeks.
A: It's already happened here, so we're in that wonky time when the rest of the world is turning over, yeah. All right — so that's all for the channel today. Tune in tomorrow morning, first thing: we're going to be talking about storage — talking about live migrations from OpenShift Container Storage 3 to 4, and doing that on the fly. We'll see how well that goes.