From YouTube: Red Hat Enterprise Linux Presents (E03): Data Lifecycle, New Storage Technologies and Techniques
Description
A show that features the people and technology that make Red Hat Enterprise Linux into the world's leading enterprise Linux platform.
A
Good morning, good afternoon, good evening, and welcome to another episode of Red Hat Enterprise Linux Presents here on OpenShift.tv. I am Chris Short, executive producer of OpenShift.tv. I am joined by the one and only Scott McBrien, co-host of the show today, and Scott has brought along a special guest. Scott, please introduce yourself, and then, you know, introduce your guest as you may. Please.
C
So I'm, as you said, the product manager, and I work on two key areas. One of them is kernel live patching, which is being able to patch your systems, especially for security vulnerabilities, without having to reboot. And then I have all the storage that lives on the RHEL host, so logical volume management capabilities and file systems, but not things like Ceph. So we build the ability to store things on a RHEL host, basically.
B
Cool, excellent. But Bob, we had talked before the show about where we should start, and we wanted to start on live kernel patching. So what is it?
C
It's a system that allows you, instead of having to reboot your system when you get a CVE, to inject a live patch that sort of intercepts the calls to the function that you're patching and uses different code to fix the security vulnerability. And it's efficient; it doesn't really cause performance problems. It works great, it's very reliable.
C
They've got to get their patches installed, and with kernel live patching you can get that extended out so that you can wait until your next maintenance window before you have to reboot to get the sort of on-disk version of the patch. And just the other day we had a presentation about customers struggling with patching overall, not specifically live patching, as we build out a simpler, more unified way to patch your systems.
B
So you talked about how it operates by kind of putting a new function, a binary function call, into memory and then kind of rerouting application calls to it. Is there a performance degradation when that goes into place?
C
Very few things in the world you can say that about. I mean, maybe the more precise version of that is that it shouldn't be any worse than the actual patch, right, if that's a better way to put it. So if the patch itself causes a performance impact when you upgrade the kernel, then you'd probably see something similar when you do the live patch. But aside from that, I don't think that the executing program can tell the difference, so it doesn't really cause any impact like that.
C
Correct, and that's a ridiculous claim. So in fact, when customers talk to us about why they don't do that, it's where they've listened to the promise and they've tried it and they've learned that, yeah, that's not really what happens. Now, we're learning from a sort of how-people-do-this-in-the-world perspective: Amazon Linux, for example, came out with their version of this, and for any given kernel we do what's called a one-year look-back. So from the moment you start running a kernel, and I mean a z-stream kernel, like a patch kernel or a minor version kernel, for one year.
C
So, rather than rebooting once a month, or, I had a customer that was rebooting once a week for the same reason, at least for the kernel part they're getting to a more normal, scheduled maintenance window, and they can have some predictability in operations because of that.
B
Yeah, so one of the questions I ask everyone on our program is: what's something you see people do that you wish they wouldn't do? And I will ask you that question in a bit, so, okay, keep an answer in your back pocket. Well, one of the things that I see a lot is that they organize around having this concept of zero downtime or zero maintenance, right? That's the most ridiculous thing I've ever heard in my life, because what in your life is ever maintenance-free, right? Are you okay with no maintenance? You don't live in a house with no maintenance. You know, you don't use appliances or cookware or whatever without maintenance, correct? Expecting that from your compute resources is...
C
You're talking about marketing, aren't you? So that's exactly right, and the thing is, we're talking about enterprise data centers and cloud users, and so they have been disabused of that notion by life, right?
C
So I think, with our thing, we're saying: we know how your life works already, which is that you have the security auditors in your system, you have something monitoring your system, and you have a demand from the CIO on high that thou shalt maintain your data center this way. And what we're saying is, we're making that experience better. Now, what's interesting in terms of where we are and where we're going to go is that we would recommend that you do all your patch management using Satellite. It's the...
C
It's the most efficient way to do it, and you can do live patching through Satellite too, because the live patch is delivered as a yum update and it's an RPM. But we don't yet have it fully integrated so that you can do magical things with Satellite, for example. So the control plane surround for live patching is something we're still building. We have a very reliable feature that's easy to use, but it's not as automatable as it will be at some point. So that's kind of the next step.
B
I'll do my best. So one of the other questions I get a lot, especially around the 8.3 release...
B
So 8.3 is our first non Extended Update Support release for RHEL in a while, and we've always said that we'll produce live kernel patches for Extended Update Support releases, right? For 8.1 we did it, for 8.2 we did it, but for 8.3 we're not planning on doing it.
C
Right. So, you know, if you're consuming something I build, you're not interested in my problems, and you shouldn't be. But the one thing that's interesting about kernel live patching, and it's why we started with EUS only, is there's a multiplier effect based on how many kernels you support. We have to have development resources for that, but probably even more critically, we have to have QE resources to cover that. So we're assessing the situation: the sort of data center where you would use...
C
...this would also be the sort of data center where you tend to stay on EUS. That's the working presumption of the program we have had. For example, this started with RHEL 7.6 and RHEL 7.7; we get the same thing where it doesn't support RHEL 7.8.
C
So there's really two things going on. We're going to go out in 8.3 without supporting kernel live patching, and then we're going to see what the feedback is, because the other interesting component of all this is that it's not all that difficult to support it on something we don't currently support it on. So if there's a hue and cry that people really, really need this on 8.3 because they're using it in production, we can actually add 8.3. It's just that we don't want to do that willy-nilly.
C
For the reason I said: we don't want to blow out the supported kernel matrix unnecessarily. So I guess, if you're consuming this content and you're thinking, well, I can't do that unless it supports every minor release, that's feedback that we want to hear.
A
Yeah, and that's good feedback to provide, for sure. So, you know, if you're watching right now, feel free to chime in and say, you know, no, like, I would want live patching in every single release, right? That would be something we want to know from our viewers. Oh yeah, like, and...
C
So I think, though, about the more common behavior with application stacks. I mean, if it's just upgrading a kernel, then you can just do whatever you want, but with more sophisticated application stacks, the customers actually have to decide which RHEL version they're going to spend the time to qualify their applications with, and I think that's why they stay on EUS. Because once they're on something and they get it working well, sometimes they stay on it, and sometimes to a fault.
C
I mean, every customer that we deal with still has systems that are running RHEL 5, and, yep, right, Red Hat Linux 9, and, you know, you name it, right? There's a system out there running it somewhere. And adoption-curve-wise, I don't know, we don't talk about this publicly in any detail, I guess, but it's just now becoming more common for people really to be jumping on the RHEL 8 train. So it's probably going to be the case that we're okay on EUS.
B
So I did a little bit of sleuthing this morning before we got on, and essentially what I found was that for 8.2, or if you're subscribed to the kind of live or rolling RHEL 8 repos... So let's say, just for argument's sake, that I left those guys in there, in the live updating RHEL 8 repo. 8.4 comes out in another six months, as per the regular cadence that we do now. Do kernel live patches all of a sudden start being introduced back into the RHEL 8 rolling repo?
C
Yes, yes, they do. Yes, they do. That's one of the interesting aspects; it was a unique wrinkle of this program when we came out with it that it was going to require major DevOps surgery to make that not happen. So we just kind of rolled with it, and I think we sometimes will call it a feature, because, if you can imagine, if someone takes the Katacoda, they want to take the next step and try it on some actual machines in their data center.
C
They can actually do that without access to EUS. They can experiment: if they have one of the EUS kernels, they can use those patches to see how well it works in their environment. So that's the upside of that. But really, it makes more sense in terms of the behavior of delivering patches to do it that way, and then, because EUS creates the fork once it begins, if you will, it doesn't exist before that. So that's the rationale, and then...
C
If we decide to do the non-EUS releases as well, which we may well at some point, we're really hoping, like, if you think up and to the right, what's the ultimate experience you could create with something like this? You get to this place where many, many, if not most, of your systems update themselves in an automated sort of way, and live patching is assumed. And if you determine it's not problematic, you might just set up something in Satellite that says, sure...
C
I'm going to create an autonomously updating Linux system for things that I can trust to that sort of treatment, and it'll just grab those live patches and install them, and I won't even really think about it. In that universe, it would be nice if it was doing that and updating itself.
C
You know, I mean, to use a weak but useful example: it's like, compared to, say, a Windows update, if you turn it on, it just kind of happens. But with live patches it's really, really interesting, because you could automate the maintenance windows as well, and you'd just have a whole bunch of systems where CVE-related patches are just sort of taking care of themselves. That's really kind of cool.
B
So, go ahead. I was about to change gears, but I wanted to do a last call on kernel live patching before we do that.
C
Just, I think, if anything, we would just want more people to know about it, you know? Yeah, so, it's cool and it works, and so we're just waiting for the adoption to pick up so that we can do even more cool things with it.
B
Awesome. So the other kind of big thing that you manage, besides kernel live patching, is all that data storage for RHEL, and RHEL 8 introduced several new technologies into the product. So I didn't know if you wanted to talk about some of those as well.
C
Sure. And I think I'll start at the high level with the strategy. So I've been with Red Hat for three years now, but before I came here, there was a volume-managing file system in tech preview called Btrfs, which people in the Linux community are familiar with, and it's an interesting set of technologies. It kind of has a similarity to ZFS, in that it's a complete storage stack that's inclusive of all the pieces...
C
...you need to manage from a disk on up. And therefore, to manage something like that requires some resources, because you have to maintain the volume management function and the snapshot function and the encryption function, and they're all unique to that stack, right? So before I came here, Red Hat had not seen sufficient maturing, I guess, on Btrfs at that point, and decided to deprecate it, and then we removed it from RHEL 8.
C
At the same time, there was a program emerging called Stratis, which you've heard of, and you have a demo up. And the idea of Stratis was: let's take the comparable capabilities that exist in the Linux stack already, whether they be in LVM or XFS or, you know, VDO, and gradually build them into something that's easy to use. So Stratis can be thought of as sort of a management patina that sits on top of all that...
C
It's both an API and a CLI, and it shows you a file system, but it integrates the ability to do compression or encryption or deduplication without having to then dive into something else to do those things. And it lets you do two things at once: you can automate the best practice to put a particular piece of functionality in place, and then you can get this sort of consistent experience when you go to manage it.
C
Okay, so that's the high-level idea, and the strategic idea is that we want people to consume these functions. (My screen just went to sleep.) We want people to consume these functions, and they don't now. So there's really a dual goal here.
C
If I could express the thought in one word, it'd be consumability. We're trying to increase the consumability of the storage functions that already exist in Linux, because they're diverse and complex to use. And, you know, the historical Linux data center that has a staff that can go out and create and build things out of different Erector Set pieces, that's one audience, and for those people, they'll take the time to learn the individual subcomponents and optimize them, and that world will still exist.
C
The second world is: I've started a company, and all I had was, you know, a credit card, an AWS account, and a dream, right? And I am consuming storage in that environment without understanding how it gets created. And as a hybrid multi-cloud vendor, we need to recreate that sort of experience too.
C
So we're trying to create a world where you have a file system capability that feels more cloud-like, that automates the underlying sub-pieces for you, and that plugs into everything that Red Hat's trying to do. We're trying to get to an experience that's less complicated. We talked a little bit about it with live patching a few minutes ago, but we want to do the same thing with storage.
C
The other thing that this approach allows us to do is, if I have to build snapshots for LVM anyway, if I have to build encryption for a volume anyway, I only have to do that work once. I don't have to recreate it when it lives in a file system; I don't have to rebuild any functions. So it allows us to neatly divide our world up into what are the features and functions that a user needs when they want to store something, and then we build it in the right place.
C
Okay, but right now it's sort of fragmented, the way you present this functionality. Great, so step two is the consumability I mentioned: build the feature, and then build it in the consumability model, so that someone can access that feature without really fully understanding all the bits and bytes. And it's going to be the 80 percent solution.
C
In all cases, I mean, there are other reasons why people get to know these technologies and deep-dive on them, but we're assuming that there's a second user base out there that doesn't want to know all that, and for that user base we have Stratis. But it won't just be Stratis; Stratis is an implementation.
C
This philosophy applies throughout. We're going to create APIs that have this logic to them, and we're going to take every function that we build and apply that logic on top of it as well, and you'll get a similar experience if you use our web console tool, for example. There'll be APIs that do the same thing, and we'll just take the stovepiped functionality that exists in Linux and mold it into something...
C
...that's really, really easy to use. And easy to use is great, but it should also automate the best practice, to make it do what it does the best possible way. We talk a lot about Insights and how Insights finds configuration errors that people would never have conceived they had, and then provides these fixes, and everything goes better. We want to skip over that piece, and we want to...
C
Well, it's just something, but here's the thing, right: you should be doing it anyway. I had an interesting experience with the developers when I first started working here.
C
We were over at our development facility in Brno, and we had all the people who work on the different storage pieces in the world, and we were having this conversation about that, right, how that's a long row to hoe. And it is, and it's going to take a while to build. And the pushback was always that the customer sort of understands the Erector Set world and that they don't really need this abstraction, right? And I just looked at them all and I said: you have this problem already.
C
Any customer that consumes RHEL has to figure these things out manually today. You're pretending that combining them and optimizing them into one thing is an unnecessary step, but really what you're giving them is a bunch of stovepipes. So by not doing that, you're saying: I'm not going to develop that for you, I'm going to let you figure it out yourself. And so the way I like to look at it...
C
If we haven't been doing that, then that's a bug as far as I'm concerned. So when you start creating these consumability interfaces that combine things, it enforces that discipline on Red Hat development as well, and we make sure that these things work well together right out of the gate. Because the option is what we do now, which is: I'll try VDO, great; now I'm going to put LVM in there, great; and, you know, it's not better anyway. The problem exists whether we solve it or not. How's that?
B
...technology right on top of formatting it, or, I should say, block I/O management technology. But, man, I still run into the one that's like: oh, I ran out of space in my root file system, which is stored on a partition, and...
C
Well, it's funny, because that's one of the nice things that happens when you start developing this way too, Scott: the developers have started to think about it that way. Like, well, if I turn on dm-thin, but don't use it to make the volume thin, and I put a little reserve space on the drive when I'm running Anaconda, I open up a world where I can turn features on and off later. So part of this process of figuring out how to combine things...
C
...is the process of figuring out, if I have a brand-new system, what are the things I can do so I don't run into what you just described down the road. That's part of this. We're definitely thinking about all of that, to say what things will have to turn on at install, even if they're not doing anything in that moment, so that I can later turn a knob and start taking advantage of the actual feature. And an example...
C
So, you guys probably... I don't know if you pay attention at this level of detail, but Fedora 33 just came out, and they made Btrfs the default file system, which is a really good decision for a workstation-level thing, for a pet, if you will. Not as good for cattle, but great for pets, right? And one of the problems that ended up driving...
C
...that was this idea of: I partitioned my drive, and now I've got this stranded capacity, and if I had Btrfs on there, it would just have this big pool on the back, and I wouldn't have that problem of allocation, right? Well, okay, that's true, and Btrfs doesn't have that problem, but there's no reason why that problem has to exist with the tools we have either. There are things we can do within the context of LVM and XFS to solve for that; we just haven't. So it's all of that.
C
It's the idea that, if we're going to make this stuff more consumable and more optimized, we should do it because it needs to be done, not, you know, not because someone's using Btrfs. And so the intent here is to make a list: every time I talk to a customer that's doing something a little different than what we do...
C
...I just ask them, okay, what's the experience you're shooting for? And then I go to the people who develop the different pieces of storage software that we have and say, okay, how do we do the same thing? And Stratis will have those capabilities eventually, to look like that, and all you'll have to know is how to use Stratis. I guess that's really where it comes down.
C
I guess when we get to where we're trying to get, it'll be: why don't you just use Satellite to run your live patches so that you don't have to reboot so often? But I haven't given them that world yet. But it's going; I want to create a world where that question bothers me, how's that? And then the same thing with the storage stacks. It's like, oh well, you know, I have this and this, and this is hard, and, you know: why don't you just implement this for me, or do that thing?
C
So it's a crazy way to answer your question, because the real answer is: there's all this cool capability in the Linux stack, in the Linux kernel, already. Why don't you use it? And the reason is, it's too hard. And so we're going to make it less hard to use, so that I can say, well, why would you do that? Why don't you just use blah?
B
And so what you're saying is: yes, we're going to continue to do things where you don't have to be, you know, a member of the system administrators' guild in order to do this. Like, we're bringing it to the people, so they can manage their storage themselves.
B
So, in that vein, I'm going to go ahead and do a couple of our already-made demonstrations on a couple of the technologies that Bob talked about with us today. The first one is kernel live patching, so let me go ahead and start that up. You'll recall that what we're going to be doing is applying a new segment of compiled binary into memory, which will replace a piece of the kernel's content that's been obsoleted due to the issuance of a CVE that's either critical or important.
B
So the first thing we need to do is install the kpatch executable, and this is going to give us the software that we need in order to apply and report on live patches on the system, all right. And if we run a kpatch list to show what live patches are there, you'll see that there's nothing currently loaded or installed, right, because we just put kpatch on the box. All right.
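In shell terms, those first two steps look roughly like this (package and subcommand names as in the demo; run as root):

    # install the kpatch tooling used to apply and report on live patches
    yum -y install kpatch
    # list live patches; nothing is loaded or installed yet on a fresh box
    kpatch list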
B
So in the case of RHEL 8.2, that's the 4.18.0-193 series of kernels, and then we won't see this again until the kernel for 8.4 is released, all right. So I'm going to use yum to list the available kpatch-patch RPMs that are out there, and here we see this one is for the RHEL 8.1 series of kernel, and here is the rollup for my kernel, the 193 series of kernel that shipped with 8.2, all right. And these are shipped as RPMs.
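That query is just a yum list against the live-patch package namespace (the kpatch-patch package names encode the kernel version they target):

    # show the live-patch rollup RPMs available in the enabled repos
    yum list available "kpatch-patch*"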
B
So I'm going to just go ahead and install one. You could manually type out the one that you want, but instead I'm going to use a little bit of shell script magic to just apply the one that's usable by my kernel. So I'm going to do this: yum install kpatch-patch equals uname dash r, which will pull in the release version of my kernel and then just pull in the kpatch rollup for that release of the kernel.
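Written out, the trick is a version match against the running kernel, which is the form Red Hat documents for this:

    # install the live-patch rollup whose version matches the running kernel
    yum -y install "kpatch-patch = $(uname -r)"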
B
All right, so it pulled it down and it's applying it to the system. Notice that it did some symlinking and started up the kpatch service. That's so that if I reboot the machine, the machine will come up and automatically load this kpatch back into memory, so it will continue to utilize it, all right. And now, if I do a kpatch list, I can see that the kpatch that I installed is now loaded into memory, and it is also installed on the box.
B
So if I'd been doing this throughout the lifespan of my RHEL 8.2 box, I could have installed several different kpatches over time, and so I'd be able to see which ones were loaded and which ones were just installed and available on the system.
A
Just out of curiosity, is there like a kpatch cleanup command, like there is with the other kernel commands, you know, the yum cleanup command?
B
So it is an RPM, so you can remove it, and that deletes it from your system. Beautiful. However, while that will remove the kpatch content from your system, we currently recommend that you do a system reboot in order to make sure that it's fully removed from memory as well. Makes sense, okay, got it; that kind of defeats the purpose of kpatches, though.
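A sketch of that removal path, assuming the rollup package installed earlier (the glob is illustrative):

    # remove the live-patch RPM from the system
    yum -y remove "kpatch-patch*"
    # then reboot, per the recommendation above, to clear it from memory
    reboot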
B
The last couple of steps here that I've got in this lab, if you're interested in doing it on your own, talk about how you know what's in a kpatch. So what I'm doing here is an rpm query looking for the changelog that's in that RPM package. So, if you don't know, Red Hat puts into every RPM we produce a changelog of what changes happened and why we made that updated version of the RPM.
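The query looks like this; the package name here is illustrative and should match whatever kpatch-patch RPM is installed on your box:

    # print the changelog embedded in the live-patch RPM, CVE references included
    rpm -q --changelog kpatch-patch-4_18_0-193 | less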
B
So if we run this on the kpatch, what we see right here is we're told when this kpatch RPM was produced, by whom, what version it is, and then, essentially, a bulleted list of why we felt the need to produce it. And one of the things referenced here is that CVE number that goes along with it.
A
And this is helpful for those sysadmins that are like: yes, we have patched against this vulnerability, because we applied this kpatch, blah blah blah, done, right? Like, if you want to definitively say, hey, are we patched against CVE XYZ, you could then just go back and be like: yes, we are, with this version of this patch. Right, like, exactly, you could get down to that spec level. Yeah.
B
Exactly right. And so the audit report that comes out the other side of that goes to somebody in the organization, and they're like: oh, you're running this version, it's out of date, you've got to do the updated version. And you're like: well, wait, that's patched, we're good! Let me show you why we're good.
B
It says you have to remediate issues within 30 days of vendor release of...
B
So the other thing to point out is that every CVE that we track has a page, yeah, on the Red Hat portal. So...
B
Yeah, and I think we'll probably be talking about this a little bit in a December show. I've already lined up a guest to talk a little bit about product security in December, and I'm fairly sure that we'll talk a little bit about this. But, you know, you can even look at the changelog and see the CVE number, and then you can cross-reference that on the customer portal. And I won't get into a ton of details on that beyond that statement.
B
The other technology I wanted to show was another one we talked about with Bob, which is Stratis. So down towards the bottom of the lab.redhat.com page, there's this Stratis demonstration, or lab, that you can try out.
B
All right. And there were a couple of different versions of Stratis floating around; we are now shipping version two, so you can check the version and see that it's version two, if that's important. The Stratis daemon is going to need to be started so that we can interact with it with the Stratis command line interface, so we'll just go ahead and start it up. And if you're not sure whether it started or not, you can do a status to see that it's now active and running.
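Roughly, the setup steps being narrated (stratisd and the stratis CLI as shipped in RHEL 8):

    # check the CLI version
    stratis --version
    # start the Stratis daemon and verify it is active
    systemctl start stratisd
    systemctl status stratisd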
B
All right, so I mentioned that we're going to create a couple of block I/O devices. So, just for curiosity's sake, we can look at what block devices already exist on the box. We can see that there's a disk drive, it's got a couple of partitions on it, and then the second partition, /dev/vda2, has two logical volumes built on top of it, so it's the physical volume of this volume group.
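lsblk gives that tree view of disks, partitions, and the LVM volumes on top of them:

    # list block devices, partitions, and logical volumes
    lsblk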
B
All right, but we need to make a couple of additional block devices to use for this lab. So we're going to create a 10-gig file, and then we're going to make it a block device with a losetup, right? If we look at the block devices, we can now see that it's right there, and it's 10 gigs. So we're going to make one more of these, whoops, to use as well.
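One way to script those two loopback devices (file paths and sizes are illustrative; the lab's exact commands may differ):

    # create two sparse 10 GiB backing files
    truncate -s 10G /tmp/disk1 /tmp/disk2
    # attach each file to the next free loop device and print its name
    losetup -f --show /tmp/disk1
    losetup -f --show /tmp/disk2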
B
All right, so now I've got two there. So that means, if I tie these two together, I should have 20 gigs of disk space available, right? And if I put stuff into them, it's really being stored in these files that we created in the temp directory.
B
"mypool" is not a particularly good name, but you would name it as something that identifies maybe what you're storing there. Or, yeah, that's probably the name I would choose: "what I'm storing there", yeah. But we're going to use this name later on in other commands, and so that's why it's important to know what it is.
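The pool creation itself, which the transcript skips past, would look like this with a loop device from above (names are from the demo context):

    # create a Stratis pool named mypool backed by one block device
    stratis pool create mypool /dev/loop0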
B
All right, and if we look at blkid, which shows you all the block devices that are there, we now have this new one. Fancy, yeah. So we've got this pool identifier, and the type is stratis, because it's now managed by the Stratis daemon.
B
All right, now, Stratis has its own native commands for looking at this stuff. So here's my pool that I created, and it's 10 gigs, because that's the amount of space that was backing it, and so forth and so on. So, pretty straightforward storagey stuff, right? It's like, if you do an fdisk -l, you'll get similar stuff; you do lvs or vgs, you'll get some similar stuff.
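For reference, the listing commands being compared here:

    # Stratis view of pools and sizes
    stratis pool list
    # rough LVM and partition-table analogues
    lvs
    vgs
    fdisk -l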
B
You could, yeah. So I have not tried the Stratis commands for doing that. I know with LVM you need to do some things, like a migrate, to get it to push all the data.
A
Well, yeah, like, if I were setting up a new storage appliance of some sort, right, like a big storage box, whatever, I would do that on the back end and just be like: okay, add this thing to the storage pool, right, but in some kind of, like, disabled or, you know, non-writable way. And then, all right, is it all synced up? And then, all right, kick the old thing off, done, right? Yep, exactly. Now I've upgraded my back-end storage!
B
Indeed. And when you're doing things like the migrations, you do end up with a bunch of block I/O that happens on the system, and if you're running a block-I/O-intensive application, you know, there can be some hinkiness there that you want to try to avoid. So if I were going to do that, I'd probably take a maintenance window, yeah, so that I could make those changes without affecting the other stuff that's happening on the box.
B
Oh, Chris Short! All right, so, getting back on track: we can see that now that I added that second backing device, the pool says that it's got more disk space, right? That's not horribly shocking. And then the stratis blockdev list will also show you the component devices.
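Those two steps as commands (device name illustrative):

    # add a second backing device to the existing pool
    stratis pool add-data mypool /dev/loop1
    # show the component block devices behind each pool
    stratis blockdev list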
B
So you can achieve similar things with LVM. In fact, you're doing the same thing as LVM at this point, where you're taking multiple component block devices and you're kind of tying them together into a single addressable entity, right? All right, so we're going to create a file system on it, and this is where we start to get a little bit more interesting.
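The file system step, continuing the naming from above:

    # create a Stratis file system; capacity comes from the pool on demand
    stratis filesystem create mypool myfs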
B
Right, and if you exhaust all the storage in the pool, you will have to do some of that, right? Like, you'll have to add another device, and then you'll have to add it to the pool. But as soon as you add it to the pool, now it's addressable space for file systems in the pool, right? So we've kind of removed that logical volume layer.
B
Yeah, now, you know, the hardcore system administrator is probably cringing a little bit, because they work with folks that are like: oh, I'll just store as much stuff as I want, forever.
B
Right, not thinking that one day that pool is going to be exhausted, yeah. And so, you know, as a systems administrator, you still need to be aware of, like, how your pool's doing. I mean, you can see that there are commands, right, to kind of show you what's up. Let me get back to doing the real demo here. Yeah, I'm gonna, I'm gonna give up on the...
B
I'll just kind of leave this one out there, and if somebody wants to continue on with the demo and, like, mount the file system and do snapshots and stuff, which is all really LVM activities under the covers, you can try that out. But, oh, actually, it is here in the pool list.
B
So we were talking about how you need to be aware of what storage is being used and how it's allocated, and you can see, in the stratis pool list, you're given, you know, here's the overall size and then here's the actual allocation out of it. And so you can still monitor it and manage it similar to how you would with...
A
When redundant storage does actually become cheap, I envision a day where Stratis is stable enough where it's, like, literally just auto-detecting, or I run some process that says: hey, check and see if I need to add disk to anything, and it just does it automatically. Like an Ansible playbook that just runs every night. It's like: all right, give this device, you know, or a pool, 10 more gigs, and give this pool, you know, 10 fewer gigs, kind of thing, right? Like, all...
B
Yeah, and we are closing in on that with things like the RHEL storage system role, where you can add file systems and space to logical volumes and stuff across the population. The other interesting thing is, like, you run a playbook on one box; that's really helpful.
B
And Bob was talking about how this allows them to make some decisions behind the scenes, like: well, maybe we should be doing LVM stuff, or maybe we should be doing VDO stuff, right? We also have a lab on VDO, which I've elected to skip in this show, but essentially it's like doing deduplication of data.
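If you want to try the VDO piece on its own, the RHEL 8 commands look roughly like this (volume and device names are illustrative):

    # create a VDO volume that deduplicates and compresses written blocks
    vdo create --name=vdo1 --device=/dev/loop1
    # report physical usage and space savings
    vdostats --human-readable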
A
Dedupe scares me; that's just the old ops guy in me, you know, right? Like, I remember when software RAID was new and very flaky, temperamental maybe might be the better word. And I also remember when iSCSI was new; oh yes, it's still a pain.
A
Well, yes, and that's why, like, RAID 10 is pretty much becoming, like, the ubiquitous thing, I feel like, everywhere. Because even when I stood up my physical server here in the house, I was like: yeah, RAID 10 makes the most sense for me.
B
Well, they don't understand that with RAID 5, you know, you have a disk failure, which means that your whole array of disks, that were probably all purchased at the same time, have all hit that age, yeah, right? And so now you have this old array, you had one failure, you know, not a big deal, you just run in degraded mode, that's cool. Until you slap a new drive in there, and then every single drive in that array has this super-I/O-intensive activity that happens, and they're already at the end of their lifespan.
A
You might... this is why my server has eight disks. Only six are in use: three were bought from one place at one point, three were bought from another place at another point, and the hot spare was bought from another place at another point in time, right? Like, this goes down to the physics of storage, which is what I don't usually like about storage. So, yeah, that's just...
B
It has a lot of the same features and commands and organizational details. So, yeah, and as Bob mentioned, Btrfs is shipping as the default for Fedora 33, while it's not in RHEL.