From YouTube: 2020-05-28 :: Ceph Tech Talk - What's New In Octopus
A
All right, hi everyone, and welcome to our Ceph tech talk for May 28. We have these live streams every fourth Thursday of the month at 1700 UTC. Today we will be hearing from staff engineer Josh Durgin from Red Hat on some general improvements and features of this Ceph Octopus release, and we'll also be hearing from Lenz Grimmer from SUSE on the new enhancements and features in the latest version. So go ahead and take it away.
B
Backporting to releases continues, so that means we're still fixing bugs in Mimic and Nautilus, and you can upgrade up to two releases at a time as well: you can go directly from Luminous to Nautilus, or from Mimic to Octopus, but in order to go from Luminous to Octopus you have to stop at Mimic or Nautilus first.
B
So as we look at what's new in Octopus, we can categorize things into five overall themes. The first of these is usability. This is a very important one for folks, especially folks new to Ceph or to storage in general, and this is where some of the largest improvements happened this past release. With the Octopus release, the orchestrator API, and the cephadm implementation of it, are now fully functional and ready to use.
B
The idea behind this is to unify the ways to deploy and manage Ceph, so that a lot of the logic is centralized in one place within the orchestrator manager module, and users of Rook or cephadm don't have to worry about what's going on under the covers: it will take care of things, exposed through the same CLI or the same API.
B
One of the big shifts here is also a move to a container-based deployment: cephadm picks a container image and deploys containerized daemons, and does so in a way more similar to other configuration frameworks, where you declare what you want to run. For example, say you want to run three monitors: you can tell it to use these hosts to place the monitors, along with three managers, some MDSes, and a certain number of OSDs.
B
cephadm is fairly minimal and easy as well, so it's very easy to get started: you just run a bootstrap command to create a new cluster and then add more daemons from there. It relies on systemd to manage the cluster's containerized daemons, and at runtime that can be podman or docker.
B
In addition to simply running the daemons, it also manages them on an ongoing basis, so there's an orchestrator ps command, for example, to see what's running.
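As a rough sketch of that workflow (the monitor IP and hostnames are placeholders, and flags may vary slightly between Octopus point releases):

```shell
# Bootstrap a minimal cluster: one monitor and one manager
# (with the dashboard), deployed as containers managed by systemd.
cephadm bootstrap --mon-ip 192.168.0.10

# Make more hosts available to the orchestrator over SSH.
ceph orch host add node2
ceph orch host add node3

# Declare the desired daemon counts; cephadm reconciles the rest.
ceph orch apply mon 3
ceph orch apply mgr 3

# Inspect what the orchestrator is actually running, cluster-wide.
ceph orch ps
```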
C
Thanks Josh, that's a good cue. Coming from the cephadm part, the thing that I'm pretty excited about is that you can basically bootstrap your cluster starting from a very minimal environment, consisting of just one monitor and one manager daemon, and within that manager daemon you have the Ceph dashboard up and running already. Our plan is that at some point you will basically be able to deploy the whole range of services that a Ceph cluster consists of from within the dashboard. We haven't gotten there fully.
C
Yet in Octopus you are already able to add hosts: if you have configured the basic SSH setup to access these new hosts, you can add them through the Ceph dashboard to make them managed by cephadm. And probably the biggest part of the job of deploying a Ceph cluster is rolling out the OSDs, creating the aforementioned drive groups, or OSD specs as they're now called, which are basically a way to describe, in a kind of pattern-matching scheme, which disks of your cluster should be used across all of the nodes. The Ceph dashboard also contains functionality to automate that; I'm going to talk about this in a following slide. First, I would like to start with some of the more user-visible, noteworthy changes to the dashboard itself. The first thing you'll notice when logging in is that the layout has changed significantly.
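As an illustration of the drive-group style OSD specs mentioned above, a spec applied from the CLI could look roughly like this (the service id, host pattern, and device filter are made-up examples):

```shell
# Declare OSDs via pattern matching: every rotational disk on
# every matching host becomes an OSD backing device.
ceph orch apply osd -i - <<'EOF'
service_type: osd
service_id: all_hdds
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
EOF
```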
C
Previous versions of the dashboard contained two different widgets: one showing information about notifications and another one showing progress information. We have now unified these. You can see a screenshot of this coming up, and it shows the progress details of everything that's going on in the background, so every Ceph component that utilizes the progress manager module can basically send progress information so it's visualized in the dashboard. cephadm is not there yet.
C
Unfortunately, that's something that is still being worked on right now: the OSD deployment process is a bit hard to follow on the dashboard at this point, but many other activities that are going on are very easy to spot. Another UI enhancement that we made is that there are several tables in which you can now select multiple rows to perform the same action on those elements, which just makes sense.
C
The screenshot here shows the new tasks and notifications bar to the right. You can open it by clicking on the bell icon in the top right; at the top you see all currently ongoing background activities, with notifications below. If you have missed the pop-up that usually shows up when a task has been completed, you can see the whole history here, and you can then either clear them all at once or delete them individually, if you want. Next slide, please. The dashboard itself has also gained a number of features, primarily around user management.
C
You can now also change your own password without having to ask an administrator: a simple feature, but it was missing so far. And we added a number of additional password policies that you can enable if you want to. They are disabled by default, but let's say you're deploying Ceph in an environment that has certain expectations or requirements on password security, like the minimum number of characters, the number of special characters that need to be used, and various other things: this can now be enabled and configured if necessary.
C
You can also let passwords expire and ask the users to update their passwords at certain intervals. And if you are working with roles, it's now very easy to clone an existing role: if you just want to create a role with a slight deviation from one of the existing roles, that's a bit easier than having to construct a role from scratch.
C
Okay, let's move on to the next slide, please. Right: a lot of focus and work in the Octopus dashboard went into day-two operations, and particularly OSD management, especially since the OSDs are Ceph's workhorses and usually require the most maintenance once the cluster is up and running. We tried to add a number of new features here, like, for example, the ability to actually see which disks are associated with a given OSD.
C
You can, if the disk enclosure supports it and is known to libstoragemgmt, now blink the hard disk enclosure LEDs to identify a disk somewhere in a rack. There's a lot of integration with the orchestrator: you can see an inventory of all the disk drives across all nodes, for example, and, as I mentioned already, you have the ability to deploy new OSDs on free disks that are still unused.
C
There's more information about disk health, and we display SMART data. The disk failure prediction module has been enabled, so you can see a prediction of the lifetime of your disks. We allow adding and changing device classes: in addition to SSDs and spinning disks, you can also create arbitrary device classes that you can then use for creating Ceph pools and applying CRUSH rules to them. Right, next slide, please.
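The device-class workflow described here can also be driven from the CLI; a sketch (the OSD id, class name, and pool name are placeholders):

```shell
# Replace the auto-detected class on one OSD with a custom one.
ceph osd crush rm-device-class osd.5
ceph osd crush set-device-class nvme osd.5

# Create a CRUSH rule that only places data on that class,
# and point a pool at it.
ceph osd crush rule create-replicated fast-rule default host nvme
ceph osd pool set mypool crush_rule fast-rule
```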
C
These are just some screenshots of the features that I've mentioned before: the way you change your password upon login, how you are getting reminded when your password is expiring, and the top left image shows the user creation dialog. There's also now an option to temporarily or permanently disable a user without deleting their account. That's maybe useful if you have people on vacation who shouldn't be allowed to log in: you can just disable that account for the time being.
C
Overall, the Ceph dashboard of course still supports integrating with other authentication systems using the SAML protocol; this hasn't changed, and it's still available if necessary. Next slide, please. There are more features: we added a number of new functions around how you manage pools. Pool quotas, for example, are noteworthy here, or the PG autoscaler: you can now decide on a pool level whether Ceph should be auto-scaling that pool, or whether it should just give you hints if the number of PGs isn't optimal, or you can disable auto-scaling altogether.
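The same per-pool controls exist on the CLI; a sketch with placeholder pool names:

```shell
# Autoscaler per pool: fully automatic, warn-only hints, or off.
ceph osd pool set mypool pg_autoscale_mode on
ceph osd pool set criticalpool pg_autoscale_mode warn

# Pool quotas: cap a pool by bytes or by object count.
ceph osd pool set-quota mypool max_bytes 107374182400   # 100 GiB
ceph osd pool set-quota mypool max_objects 1000000
```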
C
That's probably helpful if you want to make sure that some pools stay at their defined PG size, while others may be more dynamic in changing their sizes. CRUSH placement rules I already mentioned; that's the part about device classes. When it comes to the object gateway, we now added support for enabling bucket versioning and multi-factor authentication, and you can now choose the placement target when you create buckets via the dashboard. Next slide.
C
CephFS has gotten some love as well. In the list of active CephFS clients, you can now also evict clients, meaning you disconnect them from CephFS. You can manually create snapshots using the snapshot functionality that's built into CephFS, quota management has been added, and we include a simple directory browser that you can use to traverse the structure of a CephFS file system. On the iSCSI front, you now see the state of the gateways on the landing page, so you get a better overview of which gateways are currently up and running.
C
Monitoring is something that has been part of the dashboard for quite some time. We embed Grafana in our dashboards in very many places, and we're also using the Prometheus Alertmanager to indicate alerts that Prometheus is aware of. The alert management has also been improved: you can now see not just the currently firing alerts, but basically all alerts configured in Prometheus. Alright, moving on, I think that's it in a nutshell; handing over to Josh again.
B
Alright, thanks, Lenz. There are also a few other improvements to usability at the Ceph level. We tried to improve a number of the default behaviors of Ceph: for example, the PG autoscaler module is on by default, which means you don't need to worry about the number of PGs when you're creating pools. It will automatically scale up the number of PGs as the cluster fills up with data, and for things like CephFS and RGW, which have particular I/O requirements for their metadata pools.
B
We've also added the ability to mute health alerts, health warnings or errors, optionally with a time-to-live, so you can have a temporary mute. For example, if you're taking some OSDs down for maintenance, maybe you don't want to have the cluster go into health warn while those OSDs are down for an hour. These mutes will automatically unmute themselves when the alerts change or increase in severity, for example if you go from a health warning to a health error.
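A sketch of the new mute commands (the health codes shown are examples; the duration is optional):

```shell
# Mute a warning for an hour while doing maintenance; it expires
# on its own if you forget about it.
ceph health mute OSD_DOWN 1h

# By default a mute is cancelled if the alert gets worse;
# --sticky keeps it muted regardless.
ceph health mute OSDMAP_FLAGS 2h --sticky

# Lift a mute early.
ceph health unmute OSD_DOWN
```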
B
There's been some general cleanup of the various commands as well, including unifying the ceph tell and ceph daemon interfaces. Previously you had a number of commands where you had to be on the same host that the daemon was running on; these were accessed through the ceph daemon, or admin socket, interface. Now these are also available over the ceph tell interface, which is cluster-wide, so you can run these on any host and talk to any daemon.
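For example (the daemon name and command are illustrative):

```shell
# Before: the admin socket required a shell on the daemon's host.
ceph daemon osd.3 dump_historic_ops

# Now: the same daemon command works cluster-wide from any node.
ceph tell osd.3 dump_historic_ops
```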
B
There are a number of internal improvements within RADOS to improve the robustness and performance of the cluster. One is partial object recovery, which means that you only recover the changed portion of an object: for example, for an RBD workload with lots of small writes, instead of re-sending an entire four-megabyte object.
B
This improves the quality of service during recovery. We're also improving our knowledge of what's happening in the wild among our users: in Octopus we have built-in telemetry and crash reporting. These are opt-in, so either the user or the administrator has to explicitly consent to sharing any data with the Ceph project, and if they do, then we have some information about cluster sizes.
B
There is a public dashboard available now at telemetry-public.ceph.com, where you can see information about which versions are being used, how many clusters and devices are being deployed, how many OSDs there are, that kind of thing, and we hope to expand this further in the future by collecting some more metrics related to device health and failures, to improve our device failure prediction model, so that we can predict that a device will fail before it does. We highly encourage all users to opt in to telemetry.
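Opting in is a couple of commands:

```shell
# Review exactly what would be sent before agreeing to anything.
ceph telemetry show

# Opt in; re-consent is required if the report contents ever expand.
ceph telemetry on

# Confirm the module's state.
ceph telemetry status
```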
B
In BlueStore, there is improved prefetching and compaction in RocksDB, and generally the particular RocksDB version we ship has a number of other bug fixes included, as well as better use of memory within the BlueStore cache and better trimming behavior, where it trims as it goes instead of on a periodic schedule, which is a significant performance improvement. There's also tracking of OMAP utilization on a per-pool basis.
B
Additionally, we reduced the minimum allocation size for SSDs to four kilobytes, which matches the sector size on these devices and significantly reduces the overhead for small objects, especially in an erasure-coded environment. Previously, for flash, it was 16K, so if you were storing, say, a 4K object, you'd have an overhead of four times: every 4K object would take up at least the 16K minimum. This change saves tons of space and also improves performance a bit.
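The overhead arithmetic can be sketched as a small calculation: space consumed is the object size rounded up to the next multiple of the minimum allocation unit.

```shell
# Bytes consumed on disk for one object of size $1 with a
# minimum allocation unit of $2 (round up to the next multiple).
alloc() {
  echo $(( ($1 + $2 - 1) / $2 * $2 ))
}

# A 4 KiB object: old 16 KiB flash minimum vs the new 4 KiB one.
old=$(alloc 4096 16384)   # 16384 bytes: 4x overhead
new=$(alloc 4096 4096)    # 4096 bytes: no overhead
echo "old=$old new=$new"
```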
B
In RGW, there's a lot of work going on to refactor things to be more asynchronous. That started with the Beast front end, which replaced civetweb and is asynchronous itself, and it's moving forward with Boost.Asio request processing that will take the asynchronous requests from the Beast front end and send them all the way down through onto the wire.
B
There's also a lot of effort around avoiding use of the OMAP structure, which is essentially stored directly in RocksDB, when it's not necessary, since it has some overheads; it is necessary for a number of use cases, though. RGW is now using simpler objects for garbage collection, and we're attempting to add that support for some of its multi-site logging capabilities as well, to avoid overwhelming RocksDB and becoming bottlenecked on CPU.
B
Next is multi-site. This has been a theme across various use cases in Ceph for years now, but there are still many improvements coming out. The first is in RBD: RBD mirroring prior to Octopus provided replicated, point-in-time, crash-consistent views of images that you could use for disaster recovery.
B
The
downside
to
this
is
that
it's
very
high
overhead
in
terms
of
I/o,
you
have
to
essentially
journal
all
the
changes
are
all
the
rights
that
I'm
going
to
our
video
images
into
a
journal
stream.
That's
replayed
on
the
on
the
other
news
sites,
so
it
has
a
new
version
of
this
octopus,
which
is
based
on
snapshots
rather
than
I
I'm
going
journal,
which
requires
much
less
IO
overhead,
and
it
will
also
work
with
kernel
every
D,
whereas
the
reading
mirroring
today,
which
requires
user
space,
are
be
clients.
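A sketch of enabling the new mode (pool and image names are placeholders):

```shell
# Per-image: use snapshot-based mirroring instead of journaling.
rbd mirror image enable mypool/myimage snapshot

# Take a mirror snapshot on demand...
rbd mirror image snapshot mypool/myimage

# ...or schedule them periodically.
rbd mirror snapshot schedule add --pool mypool --image myimage 3h
```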
B
RGW multi-site support has been there for a long time as well: you already have the ability to federate multiple sites, you have a global bucket and user namespace, and you can choose to replicate data asynchronously between zones at a site or zone granularity. In Octopus we also added bucket granularity, so you can choose which buckets you want to replicate to different sites and which ones you don't.
B
There are also a number of improvements and work around Rook and Kubernetes, including being able to use CSI very easily, and being able to run monitors and OSDs on top of Kubernetes persistent volumes, which is mainly useful if you're running Rook in a public cloud environment.
C
Maybe I can chime in there: there is now an option to replace an OSD, in the sense that cephadm will remove the OSD but preserve the OSD ID, so that it doesn't start shuffling data around, and then you can recreate the OSD with the same ID, pointing it to a new drive. That's something that I think will also be backported into Octopus.
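A sketch of that replacement flow (the OSD id, host, and device are placeholders):

```shell
# Drain and remove OSD 7 but keep its ID reserved, so CRUSH does
# not rebalance data into the gap.
ceph orch osd rm 7 --replace

# Watch the drain/removal progress.
ceph orch osd rm status

# Once the new disk is installed, redeploying an OSD on it will
# reuse the preserved ID.
ceph orch daemon add osd node2:/dev/sdb
```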