From YouTube: Red Hat Enterprise Linux Presents (E04): Performance
Description
A show that features the people and technology that make Red Hat Enterprise Linux into the world's leading enterprise Linux platform.
A
Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another episode of Red Hat Enterprise Linux Presents. We're talking about performance today. I'm Chris Short, principal technical marketing manager here at Red Hat. I am also the executive producer of this thing we call OpenShift.tv. I'm very happy to have you here, everyone that's joined. So let's talk about some RHEL performance. Scott, take it away, buddy!

B
Hey, thanks, Chris. So we've been off the air for, yeah, it seems like forever, because you guys did KubeCon.
C
I've worked frontline support. I've worked in training, training our support organization, specifically around the RHEL 5 time frame, and then I spent just over eight years being a TAM, which is a technical account manager, where I would be the main go-to person for some of our largest customers. So that was a lot of fun, and now here I am, kind of directing what we do around performance in Red Hat Enterprise Linux.
B
Oh, so talking about performance: I know that you and I have had conversations about this in the past about what performance means. So what does performance mean to you, Karl Abbott?
C
As you know, in the x86 world everything's just so commoditized, and there are so many different variants of pieces of hardware and all these different things, and you have to bring them all to work together. Everybody's always talked about how great MacBooks work, but they have something like five pieces of hardware that they support that make up the Mac line. So of course they're able to quickly tie the software to the hardware and make sure it all works well together. We've got a much bigger challenge, because we've got literally thousands of pieces of hardware that have to work together, and you've got to make sure that these things perform. So it's really quite a game to make sure that you're shipping a kernel that's not only going to be secure and support all the different features that we have in it, but is also going to perform well in all these different scenarios, for all these different pieces of hardware.
B
Yeah, for a long time I worked in Red Hat training and certification and used to teach the performance tuning course, and people would invariably bring up, "Well, when I ran Sun equipment, yada yada." And it's like, yeah, there were three varietals of equipment that they had to deal with, right? So of course they could make super-optimized applications and such for those three varieties.
C
Operating systems... that just brings up one of the things that we've recently been talking about internally as we look forward to RHEL 9: we're considering AVX2 enablement. How do you bring AVX2 enablement into RHEL? The easiest way to bring AVX2 enablement is just to compile everything with AVX2 and be done with it, but there's so much hardware, even new hardware today that's still being created, that doesn't actually support AVX2.

So all of a sudden, if you just turn on AVX2, these pieces of hardware just aren't going to work with your operating system, and that's not a good long-term story to tell somebody: "Well, we're sorry, this thing you just bought can only run up to RHEL 8, it's only got this life cycle, and you can't run the new thing." So then there's another way you could do it, and that's that glibc actually allows us to load different versions of libraries based on detected hardware components.
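One way this shows up on disk in newer glibc releases (2.33 and later, roughly the RHEL 9 timeframe) is the glibc-hwcaps search path. The sketch below is illustrative only; libfoo is a hypothetical library name, and the exact loader path may differ per architecture.

```
# The dynamic loader reports which optimized subdirectories it will search:
/lib64/ld-linux-x86-64.so.2 --help | grep -A4 "glibc-hwcaps"

# A package that opts in can ship a baseline build plus an x86-64-v3 (AVX2-era)
# build; ld.so picks whichever variant matches the CPU it detects at run time.
ls /usr/lib64/libfoo.so.1                          # baseline build
ls /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1   # optimized build, used on capable CPUs
```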
C
So we could actually compile everything for AVX2, we could compile everything without AVX2, and we could ship two versions of the library. But that puts us in a situation where now we have to QE two completely separate operating systems that we're basically shipping at the same time, which is incredibly expensive to do at the scale that we do it in Red Hat Enterprise Linux. So from a business perspective it's almost a non-starter, because it's such an expensive lift.

So what we've decided we're going to do is that for the libraries where it makes the most sense to have AVX2 enablement, the teams that build those libraries are able to opt into building both an AVX2 and a non-AVX2 version, and then they'll do the testing for that. So where it makes the most sense, we'll enable it, as opposed to trying to enable it across the release. But that's just one of the types of things you run into when you've got so many pieces of hardware you support, and we're not...
B
Exactly. I was just going to say, we've also got POWER. Was it POWER little-endian?

C
Variants, yeah. Yeah, and Z, and Z is not going anywhere. I've heard lots of calls that it's time to kill the mainframe, the mainframe is dead, but man, people keep leasing the mainframe, so those aren't going anywhere. And now we're owned by IBM, so they're really not going anywhere.
B
So one of the things that I always hear when I talk with people about performance is they'll say something like "application performance," or they'll have an application performing badly, and they always couch that in terms of a performance task. But I always thought of it more as a troubleshooting one. Which one is it, Karl?
C
Yeah, that's a tricky one. It's "beat head against wall and repeat." This is tricky; it's kind of both, right? There's the performance task, but if I need to see how it's performing, to get into that you've got to start with troubleshooting, because there are so many cases where a misconfigured component is causing something to error out.

Then you've got the scenario where there's a bug in the code. Memory leaks are terrible for that: you have a memory leak where over a two or three week period of time we've leaked all the memory, now we're having to swap every time we've got to go get new memory, and that's causing our application to slow down. So it could be as much as a coding error.
C
Yeah, the perennial question. Having worked in Red Hat support, I've seen more than my fair share of cases that started "computer is slow," "server is slow," "application is slow," and that's all the detail you get, and it's like, well, we're going to need more than that to answer it.
C
But the question is: what is slow? And that's a tough one, because if you never define what slow is, and you never define what normal performance is, you'll end up chasing your tail continuously. You'll get: okay, it took 10 seconds to do that, we got it down to six seconds. Oh well, if we got it down to six seconds, maybe we can get it down to three. Maybe we can get it down to two. Maybe we can get it down to one. Maybe we can do it in zero.

Now, how much faster can we make it? You'll never get out of that loop. So you've got to set those parameters: all right, it's taking 10 seconds, I expect it to be done in two seconds or less, that's my goal. And then, when you get to two seconds or less on an average basis, you declare success and you don't go any further, because there is a law of diminishing returns: after a certain point, you're going to spend an extraordinary amount of time trying to actually address the issue compared to what you're going to get back.
B
Well, that's something else I've seen customers or users do: they have this expectation that they'll go from 10 seconds to two seconds, and in the performance world that's a huge amount of difference, and you're probably not going to get there by twiddling some stuff in /proc.

It's maybe some of that, but then it's probably also, let's look at what the application is doing and whether it can do things differently in order to meet that 80 percent improvement in throughput.
C
You can get a couple of percentage points better, but you're not going to get this crazy, substantial increase unless you've just got something completely misconfigured, and fixing that one configuration for your workload makes all the difference. Those situations exist. But if you have a well-running workload on a system and you go to tune the operating system, there are things you can do to make things better, absolutely, and we have tuned profiles that really show what we believe the best settings for these things are, depending on your workload, and they really do help. But it's only so much if you want the...
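For anyone following along, the tuned profiles Karl mentions are driven by the standard tuned-adm interface on RHEL; the profile picked below is just an example.

```
tuned-adm list                              # show the profiles shipped with tuned
tuned-adm active                            # which profile is currently applied
tuned-adm profile throughput-performance    # switch to a profile suited to the workload
```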
B
Yeah, and the other thing with /proc is it's all a balancing act, right? So let's say you're running a database, and you read some performance tuning guides on databases. They'll say things like: set your file write cache to be overly generous, so it doesn't flush the caches frequently, or turn down your swappiness so that it's less likely to utilize swap space for all that big anonymous data it's got out there.
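A sketch of the kind of knobs such database tuning guides mean; the values are placeholders, not recommendations.

```
# Make the kernel less eager to swap out anonymous memory (default is 60):
sysctl -w vm.swappiness=10
# Let more dirty page-cache data accumulate before forced writeback:
sysctl -w vm.dirty_ratio=40
sysctl -w vm.dirty_background_ratio=10
# To persist across reboots, drop the same settings into a file under /etc/sysctl.d/.
```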
C
Yeah. We've talked about applications performing badly from the troubleshooting standpoint; now we're kind of moving into "I want to optimize the stew out of my application and my piece of hardware," because arguably the idea of performance tuning is to get the most out of your hardware. You bought this hardware, you paid for it, you want to get the absolute most value for your investment, and absolutely, you definitely ought to be able to do that.
C
Yeah, Stack Overflow is a great resource for getting through some brick walls, but a better resource for getting through those brick walls is actually understanding why the Stack Overflow answer will or won't work in your environment, and that just takes experience. So if you're beating your head around with Stack Overflow, just be careful, because you may learn from the voice of experience and then go back and realize that was a wrong answer.
B
Indeed. Well, one of the things I really like that we've done with RHEL 8: in RHEL 7 we had things like SystemTap, which we still have in RHEL 8, but we added a lot more tooling to gather data.
C
Yeah, absolutely. We've done a lot in RHEL 8. One of the technologies that's come into the kernel space that's really quite interesting, and has a very interesting history too, is a technology called eBPF, or extended Berkeley Packet Filter. If Berkeley Packet Filter, or BPF, sounds familiar to you from a long time ago, it's very much been part of the stack that we've shipped around TCP, and over time it was just there.
C
But now it's a virtual machine with a lot of instructions that runs in the kernel, that you can load your own programs into. What that does is it allows you to basically write kernel code that you want to run and just inject it into a running kernel. Now, you're not writing that code in C; you could write it in C, but in general a lot of the code that's written for BPF is written in Python, or there is a language called bpftrace, where you can write bpftrace scripts that are then parsed by the bpftrace engine. So Python and bpftrace are two of the main ways you get that done, and in RHEL 8 we do have eBPF as a virtual machine inside of RHEL 8, and then there are varying technologies that use eBPF. So there's this thing called XDP, or express data path, that allows you to get faster network performance than going through the kernel TCP stack. It's not quite as fast as kernel offload at this point, but it's kind of a nice midway point, and the nice thing about it is you get all the protections the kernel gives you, which you typically give up when you do kernel bypass. There's a lot that it does to make sure that the traffic is flowing correctly, that the rules are not being violated, and that packets are getting where they need to go; there's just a lot of benefit to what the kernel gives you, but it's slow because it does all that work, compared to kernel offload technologies.
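To give a feel for the bpftrace language mentioned above, here is a minimal one-liner; it assumes the bpftrace package is installed and you are running as root.

```
# Count system calls by process name; press Ctrl-C to print the summary map.
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'
```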
C
So XDP is kind of a nice midway point. And then, on the performance side, we ship in RHEL 8 this package called bcc-tools, which lets us use BCC, the BPF Compiler Collection, which again is kind of how we get Python code down into BPF programs, and basically use this technology to write some pretty quick scripts to do kernel work. What's really cool about that is that you can go in and say: okay, I take my kernel code, I look at it, and I'm interested in this function.
C
All I have to have installed in user space is the BCC libraries, so that I can load those bcc tools, or bpftrace, the language, so that I can do that. And on Red Hat Enterprise Linux we absolutely enforce that you have to be root to load anything into the BPF virtual machine, because, let's face it, it would be a huge security risk to let just any joe user load whatever they wanted into the running kernel. So we have a set of scripts in bcc-tools, and they live at /usr/share/bcc/tools, and we also ship very similar tools with bpftrace, showing you how to write the same tool with the bpftrace language or with BCC, so that you can kind of have your choice of which library to pick. And there's some pretty neat stuff in there. One of my favorites, and I don't think I use it the way the tool was intended to be used, but it's a fascinating tool, is this one called gethostlatency. Actually, let me go share my screen real quick; let's share the screen and come over here. Let me change to my YouTube demo, and on here, if I do /usr/share/bcc/tools...

If I just start running it, it's going to show me time, PID, command, latency in milliseconds, and host, and what this is doing is tracking what the kernel does when it goes to resolve host names. So let's go to something like wral.com. Well, now, what's really cool about this is: check that out.
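The tool being run here, as shipped in the bcc-tools package (run as root; the sample rows below are illustrative, but the column layout matches the description above):

```
/usr/share/bcc/tools/gethostlatency
# TIME     PID    COMM       LATms HOST
# 14:02:11 2817   firefox     2.10 fonts.googleapis.com
# 14:02:12 2817   firefox     1.43 static.redhat.com
# ...one line for each name-resolution call traced on the host
```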
C
They do, they do, and it's shocking just how many host name requests there are. You're out at Cloudflare, you're out at Amazon, you're out at Google; you're sending information about this request to all these different places. And we've all read the stories about how our privacy is under attack and how all these different monolithic companies are always out tracking what you do, tracing what you do. Well, with gethostlatency running on a computer while you do your normal browsing, you can get an idea of just how many times you're actually hitting and sending some type of request to these places. fonts.googleapis.com, I mean, that's just loading the Google fonts, but nonetheless, you know that every single time a Google font is loaded, over at Google they're doing some type of tracking: oh yep, this person used this font, this font's popular, and they're pulling it all together.
C
Yeah, it's just kind of interesting that you can see that, and you saw it in real time. This is really just tracking a couple of kernel syscalls and reporting on what's actually going through. The purpose of this tool, though, and the performance aspect of it, is that it tells me the latency of these requests. So if all of a sudden my internet's going slow, one thing I could do is pull up gethostlatency and just see: are my host resolution times taking a long time?
C
Maybe I've got a DNS problem, and I'm going to see that through here. These are good times for the most part; a lot of them are sub-10-millisecond latencies. But if I start getting 200 or 300 millisecond latencies, I know I've got a problem and I need to go look into that. But I used this tool...
C
There was one time I was running this tool on a RHEL box, and all of a sudden I noticed that we were connecting to static.redhat.com, and I went: why is my box, where I'm not really going anywhere, connecting to static.redhat.com?
C
Am I doing a call home? What's going on here? And so I dug in, and I found that we do have a package that's specifically for laptops. I had the workstation stuff installed, and so I got this package, and it's basically for Wi-Fi networks, so that NetworkManager, if you have it enabled, will call home basically every five minutes, and if it gets an okay message (it's just a text file on redhat.com, nothing too exciting), it knows it's on the internet.
C
And then lab... is it labs.redhat.com or lab.redhat.com? This one is lab.redhat.com, ebpf tracing; ebpf-tracing, I should say. What's nice about this scenario is it really gives you a quick look at what's included and the things you can do. What we're going to do in this is use bcc-tools, and you're going to look at TCP connections with gethostlatency and tcplife, and tcplife is a pretty nice tool for network connection stuff.
C
So if you want to start tracing operations slower than one millisecond, you can do that, and then cachestat to look at memory access, just to get an idea of what's going on with memory. The tool that we actually trace in this lab is yum, so you'll notice that I have a bunch of different terminals here in this lab environment. I've got one for just getting things started.
C
Then I've got one for running all these different tools in, and the very first thing we're going to do is install bcc-tools. So we'll let that go ahead and run, and it goes and gets the devel package for the kernel that I need, because that is a requirement of the bcc-tools package: basically it takes those scripts, it compiles them down into the language that BPF needs, and to get that done it does need those files from kernel-devel.
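The install step being described, roughly as it would be typed on RHEL 8 (package names per the RHEL 8 repositories):

```
dnf install -y bcc-tools
# As described above, this also pulls in the matching kernel-devel package,
# since the BCC scripts are compiled against the running kernel's headers.
```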
B
And that list there just goes to show that there are a lot of tools; it's over 200 now, if I remember correctly.
C
Well, that's the great thing about the bcc tools. We've talked about all this kernel language, knowing what you're doing in the kernel, and taking the kernel source code, but the fact of the matter is these tools all have man pages. They tell you what they do, they tell you what parameters you can pass into them. You don't need to know how to write a single bit of kernel code to use these tools.
B
That's something where I think we did a better job with this, or it's a more approachable tool, than SystemTap, because SystemTap sure can do pretty much anything you want, but you have to write it, where here it's...
C
Like, we had a few example programs, but nothing like the north of 100 programs that we have here, right? And so I've just installed bpftool, which is a utility that's going to help show you what's running. At this point you can see bpftool prog list comes back empty, because nothing's running. Now we're going to run gethostlatency in the gethostlatency terminal, and that's going to get started.
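What that bpftool check looks like in practice (bpftool is its own package on RHEL 8; the sample entry is illustrative and only appears once a BPF program is actually loaded):

```
dnf install -y bpftool
bpftool prog list     # empty before any BPF programs are loaded
# After a BCC tool such as gethostlatency starts, entries like this show up:
#   42: kprobe  name trace_entry  tag 3e4f0bc2...  gpl  loaded_at ...
```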
C
And basically, we've already seen that tool; it's the one I showed you on Fedora. Then we're going to run tcplife in the tcplife terminal, and it's going to be looking at TCP connections as they happen, live. You can see there's an sshd connection that lasted for 293.57 milliseconds, in which two kilobytes were transferred and three kilobytes were received.
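The tcplife invocation being described, with the column layout it prints (path per the bcc-tools package; the example row loosely mirrors the sshd connection mentioned above and is illustrative):

```
/usr/share/bcc/tools/tcplife
# PID   COMM  LADDR       LPORT RADDR       RPORT TX_KB RX_KB MS
# 1342  sshd  10.0.0.5    22    10.0.0.9    51398     2     3 293.57
```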
B
And actually, that was your execution of that command being pasted into this terminal. Yes?

C
We said that, and now it's going to make fools of us: xfsslower, tracing XFS operations slower than 10 milliseconds. We're not doing any real disk work right now, so that's going to be empty. And then cachestat... cachestat didn't kick off. Let's do this: /usr/share/bcc/tools/cachestat, and that's just going to start showing us hits, misses, dirties, the hit ratio, our buffers in megabytes, and our cached in megabytes.
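A sketch of the two commands being started here (paths as in the bcc-tools package; the 10 means "only report operations slower than 10 ms"):

```
/usr/share/bcc/tools/xfsslower 10   # trace XFS reads/writes/opens/fsyncs slower than 10 ms
/usr/share/bcc/tools/cachestat      # per-interval page-cache HITS, MISSES, DIRTIES, hit ratio,
                                    # plus buffer and cached memory in MB
```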
C
So basically, at this point, just to give everybody a general understanding of what's going on here: Linux likes to use cache memory for pretty much everything. So if you've ever gone and looked at a Linux system and you're like, "I have no RAM, what's going on," make sure you're also looking at the cache, because it's a common complaint that "Linux ate my RAM": it's got 100 percent of my memory, it's all used, what's going on? It's probably in the cache. Linux pre-allocates caches.
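The quick way to see this "Linux ate my RAM" effect is the standard free command; the numbers below are illustrative.

```
free -h
#               total   used   free   shared  buff/cache  available
# Mem:           15Gi  2.1Gi  1.2Gi    310Mi        12Gi       12Gi
# Most of the "missing" memory sits in buff/cache and is reclaimable on demand.
```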
C
It then uses caches for memory because they're just faster. So what you're seeing here is: how many hits am I getting in the cache, and how many misses do I get? Because if I need to go get memory and it's not cached, then I actually have to go to the memory, and that's a little bit longer of an operation than if I can just get it from my cache, because my cache is faster.
C
But what we're going to see in the yum update is that once we start to get to the installation of packages... because, you know, yum goes out there, it says what packages do I need, it figures that out, and then it goes and downloads all these packages. Once it has these packages downloaded, it works to prepare a transaction, and then, when the transaction's been verified, it executes the transaction, which involves installing all the software out of these RPMs, running the scripts, and so on.
C
Yep, and then it's installing files and putting them in the right place, and a lot of times these post scripts call commands that do further disk work. So it's a fairly disk-heavy operation. And then, if I do my bpftool prog list, we can see we've got a lot of different BPF things loaded at this point, and it tells you everything we've got loaded in here; basically this is going to show you what you've got.
C
So if you get to a system and you're wondering whether there are BPF scripts running: your BPF scripts can change kernel behavior, so you do want to know that BPF scripts are running, especially when you're dealing with express data path, which intercepts network traffic and can manipulate network traffic. If you're running that, you want to be able to see that it's in play on the system, so that you don't just assume you've got a straight Linux kernel with everything the way you would expect it to be.
C
Yeah, they can, they can. We've got a lot of kprobes and tracepoints in here, and in this example we really are only working with kernel probes and tracepoints, because we're kind of looking at what's going on in the kernel. But another really cool thing you can do with BPF is you can handle user-space probes.
C
All right, so now we're going to actually run our yum update, and I'm not going to hit the button just yet, because there are a handful of things that we want to look for when we do this yum update: we're going to see the activity on gethostlatency, and when we go to tcplife we're going to see that activity as well. Let's see, do we have output from tcplife? Yes, we do. So the interesting thing with tcplife is, remember...
C
The tcplife tool is about showing you the entirety of the connection. How long was that connection alive? The connection was made, here's where it was made from and to, here's how much moved over that connection, and here's how long it was live. So when we do this with yum, we're not going to immediately get feedback on the tcplife page, because basically it's going to open up one connection to use to download the packages.
C
Actually, it looks like we'll make three connections, but you get the idea: we're only going to make one set of connections, and that's going to hold. You'll see that we're really at about 36 seconds on that connection, so you'll be able to see that. And then in the cachestat terminal you're going to see output similar to this; we're not going to get any hits and misses at that point.
C
Once we move into installing the updates and removing the old packages, we're going to start looking at filetop, xfsslower, and cachestat, because, like Scott was saying, we're going to be doing a lot of disk work at that point. filetop is going to start showing yum reading and writing all over the place; we're going to see all sorts of files, because you're unpacking files and moving them around the system. It's going to be very busy. And then over in the xfsslower terminal...
C
Yeah, so filetop: you can see it's writing or reading; in this case we're doing some hard linking and we're doing some reading. xfsslower: you can see we are hitting operations that are slower than that threshold, and we definitely do have those misses there. And there you can see now depmod is working on stuff. cpio, that's going to be unpacking the RPM, so that's an RPM getting unpacked, and you can watch it in real time as it unpacks that RPM.
C
What's really kind of cool about that is that if you've got an application... we've talked about, how do you know what your application's doing, what's being the performance problem? Well, if you come in here and you've got something that just keeps trying to read or write the same file, it's just trying and trying and trying, you're going to start to see certain things come into play here, and if you know your application, or you're familiar with what you're doing, you may be able to go, wait a minute.
C
I shouldn't be sitting here doing this much activity to this file; what's going on? Same story with xfsslower: as you can see, we're certainly hitting this. Well, what commands are hitting my disks the hardest? Obviously, right now cpio operations are hitting us the hardest. So if we determined that we had a disk performance issue, and this is again going back to the idea that you need to determine what slow is and what normal is, maybe this is fine performance.
C
But if I expected these to come back in, oh, I don't know, say 10 or 15 milliseconds, I'm going to be a little bit concerned that I have all these latencies that are longer than 10 milliseconds, because it means that operations in my application that are trying to work through the kernel are not getting back in time for me to be able to talk back to my users. So you really do have to say what's slow and what's normal, because in this case, is it slow? Does it matter? Not really; it's a lab, it lets you play and see.
B
But databases are going to go to disk for resolution, and some more modern JavaScript frameworks and whatnot will only wait a certain period of time and then they'll kill the connection.
C
The yum update would be downloading packages for at least a minute or so, so that you could watch that. But you can see now that, since we're in milliseconds, we're basically right under three seconds, which is why I didn't catch it: in three seconds I had already flipped over and was confused as to why all of a sudden we were seeing misses and it looked like it was installing. And you'll see here that this is in kilobytes, so, yeah, maybe do the math, we pulled down roughly...
C
But so you can see, it's important to define your terms. So that's an example of just how powerful a handful of tools in the bcc-tools suite can be for getting an understanding of what's going on on your system. And certainly there's a scenario where you go to a system... it's very rare that people these days are running just 10 or 20 systems; usually they're running farms of hundreds or thousands of systems, and you go to that system...
B
All right, so we put this lab together called Red Hat Enterprise Linux with SQL Server column stores, but it also uses the bcc tools. So I've gone ahead and already installed SQL Server, and it's got its database going and whatnot, but there's a BCC tool called cpudist, which shows you the distribution of how long things are sitting on the CPU.
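The cpudist tool referenced here, as shipped in bcc-tools; the positional arguments are interval and count, the histogram buckets are in microseconds by default, and the sample output is illustrative.

```
/usr/share/bcc/tools/cpudist 5 1    # one 5-second sample of on-CPU time per task
#      usecs   : count    distribution
#     4 -> 7   : 912      |**************
#     8 -> 15  : 433      |******
# (-m switches the buckets to milliseconds)
```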
B
It'll take a second for it to finish and then break down in a histogram what that looks like, but we're doing something like a five-million-record query, and you can see that the majority of the calls happen here in, what, the four to eight milliseconds... microseconds, thank you. And this system is not running the tuned profile that we set up for its SQL Server workload.
B
Chris Short... there we go. You can see that it not only gets a better distribution across how long it takes to resolve the call, but the majority of the calls are actually being serviced much faster than they were before, right? So here's where we were before the tuned profile, and here's where we're at after the tuned profile. Nice. And the actual thing that was changed that makes that difference is, bear with me...
B
It's this scheduler granularity one, and actually both of these guys. So what these two parameters do is they change the frequency at which we check to see whether the CPU can be context-switched, right, whether the job running on it is finished, and if so, we can schedule the next job on it. Now, in this case, because we're doing a lot of small queries, when we check, we wake up, we check, and we're like, oh, okay, cool, we can put the next thing on, and that's what causes that distribution to skew lower.
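The episode doesn't name the exact parameters on screen; tuned profiles aimed at SQL Server workloads commonly adjust the CFS granularity sysctls, so the sketch below is an assumption about what "both of these guys" refers to, and the values are placeholders.

```
# Check the current scheduler granularity settings:
sysctl kernel.sched_min_granularity_ns kernel.sched_wakeup_granularity_ns
# A tuned profile can raise them so the scheduler checks for preemption less often, e.g.:
sysctl -w kernel.sched_min_granularity_ns=15000000
sysctl -w kernel.sched_wakeup_granularity_ns=50000000
```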
B
Exactly. So why can't I just do one thing, Karl, and make it better for everything?
C
Well, if we could do that, then we'd be able to ship RHEL with one set of performance defaults and it would just work for everything. It'd be great.
B
I just want a "go fast" setting in /proc.
C
Absolutely. We rebased to the 5.0.2 upstream Performance Co-Pilot, which supports OpenMetrics as a data format that goes out across pmproxy: you can connect to a Performance Co-Pilot node on port 44322 and you're actually getting this pmproxy stream of OpenMetrics data. What's really cool about that is that other tools out there also support the OpenMetrics format, like Prometheus. So if you're running the latest version of Prometheus and you want to see your hosts that are running Performance Co-Pilot, you can just point Prometheus at port 44322, /metrics, on those hosts, if you have pmproxy set up, and voila, it's just going to suck down all the metrics from Performance Co-Pilot and render all of that within your Prometheus environment. And there are other tools that are using OpenMetrics as well.
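To look at that OpenMetrics stream by hand, something like the following works; the hostname is the demo machine's, and pmproxy has to be running and reachable on port 44322.

```
curl -s http://pcp-rhel83-demo:44322/metrics | head
# Prometheus can scrape this same URL directly, since the payload is OpenMetrics text.
```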
C
So you start getting all these different tools that can talk to each other and share metric data, and that's pretty exciting stuff. But also in RHEL 8 we've got a newer version of Grafana. Actually, let me share my screen again and I'll do a different demo. I'll show you: I've got two nodes here, pcp-rhel83-demo and pcp-rhel83-demo-2. What I'm going to show you real quick with RHEL 8 is we're going to set up Performance Co-Pilot.
C
Since we've got a two-machine estate, we can get this done pretty fast and we can knock it out. When you get into having a lot more machines, there's a little bit more configuration involved in getting it set up, but once you get it set up and going, you're able to go to a dashboard and see historical performance trends for pretty much any machine in your estate. So let's get going. The main command in Performance Co-Pilot to see the metrics is pminfo.
C
There are a lot of metrics. If I pipe this through wc -l, you'll see that on this box I've got 2,170 metrics installed. Just having this, where I can immediately go get the data on what a performance metric is right now, beats installing about 15 different tools to get that done. I'm going to look at just what's in kernel, and there are 151 items with the word kernel in them, so there are all sorts of different things you can do there.
C
Some metrics will come with units so that you can do comparisons, but this one does not. To actually see this, I'm going to use the pmrep, or pm report, command, and I'm going to look at kernel.load and ask for five samples. What it's going to do is print a sample every second, five times, and I don't really have any load on this box.
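The commands from this part of the demo, roughly as typed; note that the load metric is spelled kernel.all.load in PCP, and the counts will differ per machine.

```
pminfo | wc -l                  # how many metrics pmcd knows about on this box
pminfo | grep kernel | wc -l    # just the ones with "kernel" in the name
pmrep -s 5 kernel.all.load      # print the load averages once a second, five samples
```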
C
Obviously, I get straight zeros all the way across, but with Performance Co-Pilot I can just immediately start grabbing performance data off of the box. So we talked about wanting to set up pmseries. If I go to /etc/pcp/pmseries/pmseries.conf, I have two things in here that I need to change. For one...
C
Correct. pmcd is the main Performance Co-Pilot service, that's the main piece of it all. pmlogger is what actually works for logging that performance data; you could just run pmcd without pmlogger and you'll never capture anything, but you could go grab live data with that. pmlogger allows you to log out that data to different places. And then pmproxy is what allows you to basically connect in on that 44322 endpoint and actually get metrics data out of the system.
C
So if we move over to the demo-2 box: like I said, we're going to need to allow this thing to be connected to by remote systems, and so to do that, we're going to change this PMCD_LOCAL from one to zero in /etc/sysconfig/pmcd. Then we're going to go ahead and open some ports in the firewall here as well; we're going to open up pmproxy and we're also going to open up pmcd, so that one Performance Co-Pilot daemon can make a direct connection to the other, and then we're going to reload our firewall.
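Those steps on the demo-2 box, sketched as shell commands; PMCD_LOCAL is the variable named above, and the pmcd and pmproxy firewalld service definitions are the ones shipped with firewalld on RHEL 8.

```
# Allow pmcd to accept connections from other hosts, not just localhost:
sed -i 's/^PMCD_LOCAL=1/PMCD_LOCAL=0/' /etc/sysconfig/pmcd
# Open the pmcd and pmproxy services in the firewall, then reload it:
firewall-cmd --permanent --add-service=pmcd --add-service=pmproxy
firewall-cmd --reload
systemctl restart pmcd pmproxy
```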
C
You can see that there's some data that's been stored already for that, and I can come into a pmlogger configuration, because this is where I'm going to ask it to go get the logs from demo-2: control.d/remote. I'm going to put this in here, and a lot of this is just... I'm going to change my host name here, I'm going to tell it to go to pcp-rhel83-demo-2, and it needs to store this in pcp-rhel83-demo-2.
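A sketch of what that remote pmlogger control entry looks like; the field layout follows the control file format PCP ships, the host name and directory are the demo's, and the exact logging options are illustrative.

```
# /etc/pcp/pmlogger/control.d/remote  (on the box that will store the archives)
# host              primary? socks? directory                           args
pcp-rhel83-demo-2   n        n      PCP_ARCHIVE_DIR/pcp-rhel83-demo-2   -r -T24h10m -c config.default
```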
C
So now, if I restart pmcd and pmlogger, and if I've done everything correctly, I should be able to go to where I told it to store those logs, in /var/lib, or /var/log, sorry, /var/log/pcp/pmlogger, and if I look in here you'll see I've got this particular host, but I've also got rhel83-demo-2 and I've got files coming over. So it's now actually taking Performance Co-Pilot logs from demo-2 and storing them on this box.
C
So now I've got pmseries basically keeping historical data for pcp-rhel83-demo and pcp-rhel83-demo-2, and as I want to add hosts to my estate, I can just keep running the same things. We actually have some system roles that we ship with Red Hat Enterprise Linux to make doing this at scale a lot easier than having to try and do the automation yourself. So there's another one called...
C
Yep, another good plug for using Red Hat Enterprise Linux tools, because we've tried to do all the automation for you. And then here I'm going to go ahead and enable Grafana, so that on a reboot I would have it, and I'm going to start grafana-server.
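The Grafana steps, roughly as run in the demo; the package names are RHEL 8's grafana and grafana-pcp, and enabling the Performance Co-Pilot plugin and data sources is done inside the Grafana UI once it is up.

```
dnf install -y grafana grafana-pcp
systemctl enable --now grafana-server   # enable at boot and start it now
# Then log in to Grafana (port 3000 by default), enable the "Performance Co-Pilot"
# plugin, and add the PCP Redis / PCP Vector data sources pointing at pmproxy on 44322.
```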
C
So basically we're going to pull data from pmproxy into Grafana, and the data source is working. So I'm going to come over to Dashboards, Manage, and then I'm going to go to the Redis Host Overview, and what you're going to see here is that it's got data out to the last six hours. So I'm going to change it to the last five minutes, and in changing it to the last five minutes...
C
I can now go pick which host in my series I want to get the data for. So I can switch over to demo and see what's been going on on pcp-rhel83-demo, or I can switch and see these charts for pcp-rhel83-demo-2. And Grafana is pretty nice in being able to actually time-slice and come to exactly the time I want to look at.
C
If I had a known performance problem on a known host, and I had all this stuff set up, I could just come in here, go to the host, find the time slice, and go, wow, okay, I see some performance problems here; let me dig in at this particular time around something like memory utilization or disk utilization. So that's Grafana in RHEL 8.
C
Correct, yeah. If you look at what we included in RHEL 7 for Grafana, it was a much older version of Grafana, and the jump from seven to eight is huge. But then with 8.2 and 8.3 we've definitely done quite a bit, and in 8.4, for those of you that are Microsoft SQL Server nerds, we're actually going to have a Microsoft SQL Server dashboard that ships with our Grafana as well. So really...
A
But no, this is great, right? If anybody ever asks me a performance question about RHEL, I'm just going to send them straight to this video.
A
Please join us tomorrow, where we will be doing a significant amount of streaming as well. We'll be kicking it off in the morning with the OpenShift Container Storage office hours. So if you've got storage questions, which we had earlier today, bring them there and we'll get them answered for you.
B
And in two weeks we'll have another episode of Red Hat Enterprise Linux Presents, which will be Mark Thacker, who is our product security product manager. That's awesome. So, yeah.