Description
A show that features the people and technology that make Red Hat® Enterprise Linux® into the world’s leading enterprise Linux platform.
A
Well, it snowed overnight here, so I'm still waiting for spring.

B
So we thought maybe we'd revisit that and talk about how it ties together with tuned, which is a daemon for managing that. So how much do you know about tuned, Chris? Besides clearly loving it.
A
I mean, so, I know it has a series of profiles, and I know those profiles are very well constructed for their use cases, but sometimes they're named confusingly, right? Like, if I'm optimizing for network throughput, what does that do to IO? There's some missing explanation that you kind of have to figure out: what is this workload like before you set up tuned to do something? Can you have tuned record and understand, right? Just kind of get some data and try to figure it out itself, I feel like.
B
Well, we do a recommendation, yeah.
B
It'll choose, or recommend, which profile it thinks works best for your workload. But yeah, you're right, in effect. So, we initially included tuned with RHEL 5.
B
Yeah, but as we've built up the profiles, one of the things that got added was the ability to base profiles on other profiles.
B
That includes throughput-performance, right? So you end up with this stacked pile of things that happen to your system, and being able to detangle that a little bit might be interesting.
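That stacking is driven by the include directive in a profile's tuned.conf. As a sketch (the child profile here is hypothetical; section and key names follow tuned's conventions):

```ini
; tuned.conf for a hypothetical child profile
[main]
include=throughput-performance

[sysctl]
; applied after the included profile, so on a conflict this value wins
vm.swappiness=30
```

The included profile is applied first, and anything the child sets on top of it overrides the parent's value.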
A
Yeah, and so there's actually... like, I'm looking at my Fedora 33 host right now that I'm running in powersave mode on a server, because it's just me using it. I don't need the highest bandwidth and capacity and everything else going on. It runs OpenShift just fine, and when I log into it, it does my workloads just fine too. So you know, I've set up a powersave profile, but somebody...
Is going to be like: no! No! No! No, don't do that! You need, you know, something-something-something, because it's a virtual host, for example, or because it's running OpenShift and needs to be more latency-performant, or something to that effect. So there's a lot of different profiles and a lot of different ways to apply them.
B
Or better, yeah. Okay, so the first thing...
B
And so, for example, I was just monkeying around with one for Microsoft SQL Server.
B
There's that new one that I got, and the reason I installed this one is, one, to show that you can sometimes get tuned profiles from other places, but also because we're going to take a look at this one. And I know that for RHEL 8.4 there's actually an update to it. So...
B
But so, if we take a look at the RPM payload...
B
So essentially we got a directory, one configuration file, and a man page for it. Okay, so...
A
And folks, just to let you know, there are profiles for Postgres, Oracle, SAP HANA... trying to see if there are any other database ones I see here; obviously MS SQL. But yeah, and you can get more from other places, right? Like, if your database vendor has made a tuned profile, you can install that too, and that's something...
B
I think ISVs don't completely buy on to: you can actually package not just your ISV software for Red Hat, but you can bundle in these kinds of RHEL or Red Hat distro-specific things to help optimize the performance of your app on it. Anyway, so this is the tuned configuration. You can see that this one, in [main], has an include of throughput-performance, which a lot of our tuned profiles will start with.
B
So if throughput-performance has a setting that conflicts with one here in the mssql tuned profile, because the mssql profile was the one executed last, it pulled in all of throughput-performance and then put the things from the mssql profile on top of that. So whatever happened in the mssql profile is what actually ends up on the system.
B
A lot of these down here (excuse me), down here under the sysctl section: these are those /proc/sys tunables that we talked about a couple of weeks ago, or actually maybe a month ago now, which are not persistent. So the other part of tuned is that there's a daemon that's run by systemd.
B
It will start at boot, look at the configured profile, and apply it at boot time. So even though we may have made these changes live, and those changes are put into /proc/sys, which is not persistent, we're at least making those same changes every time the system is booted.
B
All right, so you mentioned that there were a whole bunch, and we saw that list when we did tuned-adm list, and those are all just stored in /usr/lib/tuned.
B
There's also the /etc/tuned directory, right? So this is where you put stuff when you make your own tuned profiles, and you'll see that this corresponds with the list that we got from the tuned-adm command.
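As a sketch of what making your own profile looks like: a profile is just a directory containing a tuned.conf. The real location is /etc/tuned/&lt;name&gt;/, but the example below writes to a temp directory so it runs without root or tuned installed, and the profile name and values are made up:

```shell
# Create a custom tuned profile layout (in a temp dir for illustration;
# on a real host this would be /etc/tuned/my-db-profile/).
profile_dir="$(mktemp -d)/my-db-profile"
mkdir -p "$profile_dir"
cat > "$profile_dir/tuned.conf" <<'EOF'
[main]
include=throughput-performance

[sysctl]
vm.swappiness=10
EOF
# On a real host you would then activate it with: tuned-adm profile my-db-profile
cat "$profile_dir/tuned.conf"
```
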
B
All right, so balanced is just kind of a generic tuned profile, and you can see things like the governor: the CPU governor is set to conservative or powersave, so we're not cranking the CPU all the way up to full power mode.
B
That's pretty much it, yeah. So I think for most folks, the ones they're interested in are probably based on throughput-performance. But before we get to that, because that's going to cover a whole bunch of layers... oh yeah, I also think these two are interesting: virtual-host and virtual-guest.
B
All right, so virtual-host includes throughput-performance, and then it updates the dirty page cache ratio. So when five percent of the system is dirty page cache, it kicks off a kernel thread to start writing that out to disk. This over here is the kernel sched migration cost. So if you...
Move a process from one CPU to another, you lose all the cache that was built up on the CPU you were running on before, right? And for things like virtual machine workloads, that's a big deal. Losing all that cache and having to rebuild it on another CPU was expensive, so they created this cost that basically says: hey, here's how expensive it is to move to another CPU. So we try to keep things on the CPU they were originally running on. Nice.
B
You go... remember we make a recommendation? It turns out that recommendation is applied by default on boxes where you haven't done anything, at least on RHEL 7 and 8. So this virtual machine got the virtual-guest profile applied to it. All right, so again, it includes throughput-performance, so we get whatever is in there, and then we're setting it up so that we keep more dirty pages in page cache than the default. Right, we set swappiness to 30. Swappiness is a value between 0 and 100, yeah.
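Swappiness is an ordinary /proc/sys tunable, so you can inspect the live value directly (assuming a Linux host; no privileges needed to read it):

```shell
# Read the live swappiness value the kernel is using; tuned's [sysctl]
# section writes this same key (vm.swappiness).
swappiness=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness=$swappiness"
```
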
B
And in reality, in a lot of places (let's say you were a cloud provider instance), modern systems don't have swap space. Nope. So you could set that to one and probably be fine, although if you don't have swap space, there's really nothing for it to swap anyway. So, right, I actually would probably add something here.
B
There you go, yeah. So the default is deadline. Oh, actually, it looks like I need to change my tunable, because there's no longer a scheduler called noop; it's called none in RHEL 8. And so the tunables that are in this queue directory are ones that are germane for the IO scheduler. Well, actually, the ones that are down here in iosched are the ones that are germane for that scheduler.
B
The other ones are just generic for the disk device, so things like readahead: after you've retrieved a piece of data from the disk, how much additional subsequent data should you read? Because if you're dealing with file IO, you go and grab a block of a file.
B
What's the likelihood that somewhere close in the IO queue there are more blocks of that same file? So we can do things like set the readahead to slurp in more data, expecting that we'll be asked for that data shortly. Right, let me go back in here.
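In a tuned profile, these per-device knobs live in the [disk] plugin section. A sketch with illustrative values (see the tuned.conf(5) and tuned-profiles(7) man pages for the exact syntax):

```ini
[disk]
; ">4096" means "raise readahead to at least 4096, never lower it"
readahead=>4096
; I/O scheduler for matching devices (on RHEL 8: none, mq-deadline, bfq, kyber)
elevator=none
```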
B
All right, so let's say that we kept the default of deadline, right? So we have the disk IO queue, and IO requests are coming in and being put in the queue, and then we're spending our time in the kernel going: oh wait, this one is close to that one; oh, this one's about to expire. And it's doing all this reordering on the queue.
B
But then, where does that go from the virtual machine to the hypervisor, right? It turns out the hypervisor also has a scheduler, yeah, that's doing the same exact stuff, but now across all the machines in aggregate. And so you spent all this time on the virtual machine making this order as best you could, but when it goes to the hypervisor, it just gets reordered anyway.
B
So why waste your time doing the disk IO ordering on the virtual machine, when it's going to be reordered as soon as it gets to the hypervisor?
B
So that's why we set it to none. And it may not be the best for every workload; it also depends on what all the virtual machines are doing and what the hypervisor is doing. There are a lot of things in play there. So whenever you make changes, you always want to not just go, "yeah, it looks good." It's like: no, we're going to put the change in, and then we're going to try it out, yeah, and we're going to measure some stuff, yeah.
B
So you were asking about throughput-performance. Yeah, yeah, I'm curious...
A
Why that is underneath virtual-guest.
A
Oh really? Okay, yeah. Rapscallion Reeves asks: would that also affect any shares or VFIO?
B
The elevator, maybe. Maybe VF... well, so, maybe for whatever network shares you have in block, right? So if your network shares don't show up as block devices in the /sys/block directory, no. Things like iSCSI devices would show up as disks, so it could potentially affect those. But there's also a regular expression that you can put in that disk subsection of your tuned profile that identifies which disks it should apply to.
B
So if you're in a state where you have iSCSI devices and you don't want this to apply to them, you can use this regular expression syntax to make changes so that it avoids those iSCSI disks. In short: in our episode guide, I put in a couple of links that might be interesting.
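A sketch of that device-matching idea in a [disk] section (the pattern here is illustrative, and the exact matching syntax should be checked against tuned's documentation for your version):

```ini
[disk]
; Limit this section to matching devices; a leading ! negates the match,
; so this would skip anything matching sd* (pattern is illustrative)
devices=!sd*
elevator=none
```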
B
There are two links to the same Red Hat product documentation. It's called something like the RHEL system monitoring and performance guide.
B
Chapter three talks about making changes to tuned profiles, and in there is, like: okay, here's how you can make changes to disk; here's how you can pass the elevator; here's how you can pass the regular expression. You can also pass other tunables; we could actually set the readahead in that disk section. That would apply to, in my case, all disks, because I didn't limit it. But if you're limiting it with a regex, you can tell it what readahead you'd like to apply to the set of disks matching your regex expression.
B
Yes, so NFS devices typically don't show up as disk block devices, so it naturally wouldn't affect those, because they don't show up as that device type in /sys.
A
This is one of the reasons we work on different teams.

B
I know, I know, okay, but I still love it. Thank you, I appreciate it. Cool, so you should have that now. I do, thank you. Okay, awesome. And so we're talking about throughput-performance.
B
So we see a couple of things. We're starting to introduce the idea of architecture-specific tunables, and so up here, this is for Marvell ARM processors.
B
But we'll get more there. Here's an AMD one, and then there are actually settings later. In fact, you see right here: the ThunderX one then has a regex that says only do this on ARM, but only if you match that ThunderX CPU regex. So it can get fairly complicated. Anyway, so, a couple of things in the throughput-performance one: we set the CPU governor to performance, meaning crank that up to 11.
A
Rock that thing, yeah. Right, so, yes...
B
So we set the readahead. So when we say virtual-guest, and virtual-guest includes throughput-performance, it cranks up the CPU governor (although on a VM that really doesn't matter), but it does set the readahead to 4096 or greater on disks attached to the VM. And then down under sysctl, we set the scheduler granularity. So every, what, one million...
B
Yeah, it'll check to see whether we need to change what the scheduled job is on the CPU, right? And then down here: we will check, every this many nanoseconds, whether we should make that decision or not. And then we set the dirty page ratio higher than normal, and you may remember that the virtual-guest profile toggles that back down to 30.
B
That is somewhat newish. Okay...
B
Whatever is done last wins. So tuned starts up, and if you include a profile in your tuned profile, the included profile is executed first, okay, and then your profile is executed on top of that. So in the case of virtual-guest, throughput-performance is executed first and then virtual-guest, and that's why you're able to take the swappiness from throughput-performance and override it with a greater amount of swappiness, or an affinity for smaller swap usage, with virtual-guest.
B
Right, so I think that, to make things simple, yeah, I like...
B
Tuned works with both the sysctl tunables and the /sys tunables, whereas sysctl only works with the /proc stuff. Got it. So I would say that tuned is the better way of managing performance settings, because you're able to manage across multiple, right...
B
Kinds of settings, yeah, exactly. And then you obsolete the need to know about which one happens in which order. But essentially, sysctl is executed... in the olden days, that was done through an rc script.
A
I'm kidding. Is tuned RHEL-specific, or will it work on other distros?
B
It could work on other distros. I know the Red Hat distros all have it, so Fedora has it, RHEL has it, CentOS Stream and CentOS Linux have it, and then, of course, the downstream distros of those would also have it. So Oracle Linux, and the newer downstream distros that are starting up, like Alma, will still inherit it. All right. So, okay, here we go. This, I think, is the sauce right here: the tuned service is started after the systemd sysctl service. So sysctl will start up...
B
Well, the /proc/sys and /sys stuff is kernel stuff, right? So other distros use the same Linux kernel, so potentially they could use tuned as well if they wanted to put it in their distribution. All right, so I mentioned that we have that MS SQL one.
B
So here's what's in the mssql one today. It includes throughput-performance. Basically, it looks at whether you're using huge page memory; if, through the running of things, you're not using a consistent memory space in the huge page, it'll actually copy it to a different huge page to congregate it better. And then down here, we're setting the memory map, like how big the virtual memory map should be. We do not use NUMA balancing, and that's going to keep us from moving things between CPUs as much. And then down here...
B
We're doing some kernel scheduling stuff, right? So we're determining the sched latency, the granularity of how often we need to wake up and check things, and how often we need to make changes to the kernel scheduler. All right, so let me edit this.
A
...have it available, Debian being one of them, so that means all the other Debian-based ones are probably going to have it available as well, which is cool.
B
All right, so this is the new stuff that's going to be in there. So we start off at the same starting point: we're going to start with throughput-performance. We're making a slight CPU change for whatever force_latency is; that's probably /sys, I would imagine, yeah. But you can see (whoops, went a little too far) there's a whole bunch more sysctl stuff happening in the updated profile that will drop with 8.4.
B
So this is currently in the 8.4 beta. So we're changing the swappiness: before, we were just taking the swappiness of 40 that came from throughput-performance, and now we're like, oh wait, no, we should turn that down for databases, because that's absolutely better for databases.
B
We're also adding in a whole bunch of page cache information, like how often we should write it, which didn't exist in the earlier tuned profile. So we talked about this a little bit a while ago, about how this set of dirty_background_ratio, dirty_ratio, dirty_expire_centisecs, and dirty_writeback_centisecs is essentially an effort to keep as many unwritten file writes in memory longer, so that we're not doing a whole bunch of disk IO on this machine.
B
Right, so the other thing is we set transparent huge pages to always. I don't remember what the default is; we could check on this VM. But now we are going to always use transparent huge pages. In RHEL 7 and RHEL 6, a lot of times...
B
We defaulted to not using transparent huge pages, and that's starting to break loose in RHEL 8, where we're just better at allocating them and not delaying memory requests because, behind the scenes, we're trying to allocate this huge page to hand back. So apparently they've done some testing between Microsoft and Red Hat and figured out that they can get away with this and actually improve performance. Nice. We're increasing the memory map size, so now, for every process, we have more available map space.
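In profile terms, the transparent huge pages setting is a one-liner in tuned's vm plugin section; a sketch:

```ini
[vm]
; tuned writes this to /sys/kernel/mm/transparent_hugepage/enabled
transparent_hugepages=always
```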
B
So here, what we're doing is changing the read and write memory settings for network IO. So the default is how big a memory buffer should be, in, I believe, bytes of RAM.
B
So the web app then makes a connection to the database through the network to try to retrieve that data at once. So we're bumping the network buffer sizes to be able to shove more data, or read more data, when we receive those connections. All right, and then pretty much everything down here was already in the old tuned profile.
B
They added it, and it looks like the migration cost is actually lower than it was previously in throughput-performance. So we're saying it's not that expensive if we need to take a database thread and move it to another CPU. My guess would be that it's because the database requests that come in are not contiguous parts of the read-in tablespace, right? So there's not a lot of expense if we switch it to another CPU, because we're not losing the cache; there's not a lot of cache.
B
Yeah, I'll check my RHEL 8 physical machine.
B
Okay, so my nothing-special RHEL 8.3 laptop is running balanced by default, and if you look at the balanced profile, it has stuff in it like audio settings. So I think it's more for, like, a... yeah.
B
Yeah, and if you look at what it's doing here: it's called network-latency, but it's really network low latency. So it includes latency-performance, which is the low-latency performance profile.
B
We turn off transparent huge pages. So if something asks for a huge amount of memory, instead of trying to allocate a huge page and pass that off into its memory map space, we just give it individual memory addresses, because there's time taken to resolve those huge page allocations. And then down here, it's looking at how often we should check the network stack for its status, and then they also added TCP fast open.
B
So, trying to remember... I remember looking at this at some point, and it's something like: we shortcut some of the TCP connection setup, like the validation stuff, to just get something going; or maybe we keep a pool of connections, or something. It was something that basically made receiving or sending TCP packets slightly faster.
B
Interesting. This one specifically is for... yeah, which is no longer a thing for Red Hat.
B
Yeah. In fact, I think tuned-adm...
B
All right: that tells you what the recommendation would be, but it doesn't actually set it, right? It's not until you do a tuned-adm profile and then assign the profile. So I can do something like that, exactly.
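That workflow looks something like the following; the block is guarded so it degrades gracefully on a machine that doesn't have tuned installed:

```shell
# Ask tuned what profile it would recommend (read-only; changes nothing).
if command -v tuned-adm >/dev/null 2>&1; then
    recommendation=$(tuned-adm recommend)
else
    recommendation="tuned-adm not installed"
fi
echo "recommendation: $recommendation"
# Actually applying a profile needs root, e.g.: tuned-adm profile virtual-guest
```
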
A
So there's logic in there to tell you, right? But you can also change this for your environment and have something like Ansible come around and plop it down on the file system, and say: you know, you have a specific kind of hardware that needs a specific kind of profile. You know, it's network-latency versus latency-performance... well, those two inherit each other, but you get what I'm saying, right?
B
I'd go even a step further, right? Say, remember two or three months ago we talked about system roles?
B
So one of the system roles is called kernel_settings, and kernel_settings lets you pass parameters. What it's really doing behind the scenes is looking at the running tuned profile, taking those settings that you put in your Ansible playbook, and sticking them into the currently executing tuned profile. Interesting. So...
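A minimal sketch of such a playbook, assuming the rhel-system-roles package's kernel_settings role (the host group and values here are made up):

```yaml
# Hypothetical playbook: persist a sysctl value via the kernel_settings
# system role, which stores it in the active tuned profile under the hood.
- hosts: database_servers
  become: true
  roles:
    - role: rhel-system-roles.kernel_settings
      vars:
        kernel_settings_sysctl:
          - name: vm.swappiness
            value: 5
```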
B
You don't have to write your own Ansible playbook to do all this anymore. You can use the kernel_settings system role, and it actually uses tuned as the storage method for keeping those changes persistent across your population.
A
Let me see if I can find that video before we get off. Here... probably not. Well, it was just "system roles," right? That was actually in... or just search for "roles."
A
I have to spell "roles" right first, though, yeah. I found it. Here it is; it's got 10 likes. Good for you.
A
Well, you know, wherever you're watching, please click like and subscribe if you can. But yeah, there's the System Roles episode, for those that are curious.
A
I don't either. I'm curious what the audience thinks, right? Like, how often are you changing sysctl settings? Would it make sense to make a system role out of that, or would it make sense to make a, you know, profile? Where is the line between a system role and just another tuned profile? That might be a good question.
B
Yeah, so I think there's some thought to be had there. I like the idea of using system roles, because I can make changes to my playbook and then execute it across my population. So if it turns out that I'm reading a tuning guide and it says, oh, we should go from a swappiness of one to a swappiness of five, I can go to my playbook and make that change, and then execute the playbook, and the system role will then apply it across the population.
A
So, okay, we might go on a spelunking mission here. Rapscallion Reeves asks: are there any specifics to tuning a container host, as opposed to a VM host? Now, that is where it's like: let's go look at RHCOS and see what it's doing.
B
So I have a feeling the answer is "not really." Probably not, yeah. Because unlike the virtual guests, where they have their own IO subsystem and a variety of other things, on a container host you're sharing the same kernel as the host operating system. So, number one, you don't get to change any of those things, because in the container you're not doing any of those things.
B
If your containers are doing a bunch of network protocol stuff, right, because it's all web services, maybe you would, on the host, make some changes to the TCP stack or the overall network stack, so it can push out larger amounts of data. I think what you're probably more interested in on the container host is less tuned and more cgroups.
A
Things happening... Rapscallion Reeves, the place to go look to find this is actually in the OpenShift Container Platform release notes. So in 4.6 we did a partial tuned real-time profile, and now in 4.7 it looks like we've got some logic: if there's an invalid tuned profile that gets created, the OpenShift tuned supervisor process may ignore profile updates and fail to apply the updated profile. This bug fix keeps state information about tuned profile application success or failure.
A
So now openshift-tuned, which is a process, recovers from profile application failures on receiving new valid profiles. So...
A
There was an issue with tuned between versions, and we fixed it, and there's a BZ linked in the 4.7 notes here. So yeah, there's definitely tuning happening in the OpenShift world, for sure, which means it's happening on the RHCOS nodes themselves, which are part of OpenShift.
B
And I have a feeling that, like, the profile is doing things like keeping dirty pages in memory, so you're not burning a bunch of cycles writing stuff out to disk, and then also making sure that you don't move schedulers...
B
Sorry, not schedulers: move CPUs. Because if your container was executing on one CPU and had all the cache information there, moving to another CPU would not be great for it, because you'll then have to spin up a whole bunch more cached information.
A
I'm trying... I was trying to do an oc debug to figure out where it is in the file system. Give me a second; but I don't want to leave you hanging either, Scott. So, oh...
B
It's cool. One other thing you might take a look at, at least for RHEL, is bcc-tools. bcc-tools uses the eBPF in-kernel virtual machine to slurp out real-time data from the running kernel, and we have things like cache hit and miss reporting data, so you can kind of see what's happening at the app layer while your applications are running. That might be useful, especially in a place where you have multi-tenancy, with different container applications running.
A
Yeah, wow, my oc foo was failing me today, but yes, you're absolutely right.
A
So if you're super curious, go grab one of your OpenShift nodes and debug it real quick, and you'll find it; it'll be in a similar location, like /usr/lib/tuned. I just can't get into these nodes right now... oh, because they're in a not-so-fun state. Oh, my node's upgrading, duh! That's why it's not working.
B
Sure. I did throw in one more link in our document. Okay, so this is one of the lab.redhat.com labs. It masquerades as a SQL Server lab; in fact, its name is SQL Server C-store, and it does two things. One is...
B
It shows you that when you're using Microsoft SQL Server, or really a lot of different databases, using column stores will save you effort when you're doing queries. But the other thing that it does, and this is the more RHEL-interesting thing, is that it starts off by running the cpudist bcc tool against the MS SQL query process.
B
That's a big thing, but the histogram that it produces shows you, in the context switching, how long jobs are running on the CPU, right? And all of them are kind of right at this one...
B
Plateau; or, a lot of them are at this one segment of time. And then the lab has you apply the mssql tuned profile, and one of the things that the mssql profile does is change the scheduling granularity. And then you rerun the same query with cpudist, and what you'll notice is that, all of a sudden, instead of everything being in a line right at this one time, you get this much nicer gradation of context switching, because we're checking more frequently, and when a query resolves quickly...
When a query takes a little bit longer, it might be two or three checks and then we can switch it. And so that's a real-life example.
A
Very cool. All right, well, I mean, we're at 14:57, man; we still talked an hour. We did it! Yes, yay, mission accomplished, hang up a banner. Yeah... my cluster failed to upgrade. All right, that'll be fun to debug later, as if they didn't have enough to do. Chris...
B
We've had a guest every couple of weeks, for every episode, and I think that we're going to go to a format where we're doing a guest every other episode. So next episode I'll have to find a guest to talk to us, which usually is not a problem. But then, after that, what did we say? We're going to talk about file system stuff.
B
And then I think that might parlay into just, like, a brief introduction to SELinux.
B
Yeah. So what we're going to do, or what we're going to try for the next while, is: we'll have a guest one episode, and then we'll do something like this, free-form digging down in the OS, for the next episode. Then we'll go back to a guest, and then we'll do free-form excavation in the OS, yeah.
A
I got whatever I want in my basement. But anyway, great show; I learned a lot: the inheritance, all the last-applied stuff. Somebody wants an Ansible workshop; I mean, we could do, like, an intro to Ansible on RHEL kind of thing, if you want.
B
Yeah, that'd be fun. I could ask Sean to come over and be our guest.
A
Yeah, I could ask too; I used to work with him, you know. But yeah, the Ansible channel does do stuff on a regular basis too. So if you haven't seen the Ansible YouTube channel, go find it; they're doing some new stuff with it lately. So yeah, Rapscallion Reeves, if you want some Ansible content, we can make that happen for sure. So yeah, that's it for the streaming on the air today. Tomorrow we start bright and early, 0900 a.m....
A
Eastern, when we talk about storage: what happens with failed migrations? What do you do when things go bump in the night during your massive migration of workloads, or storage in this case? So yeah, that'll...
A
Be an interesting little adventure that Eric Nelson, Michelle Obama, and I will be on first thing in the morning tomorrow. And it's a pretty full day, so, you know, check out the calendar when you get a chance and sign up for Red Hat Summit. It's very free, and I highly encourage you to go take advantage of that. And yeah, have a great day out there, everybody, and if I don't see you the rest of the week, have a great week.