From YouTube: Ceph Orchestrator 2021-07-13
A
If you... Podman puts the container name into /etc/hosts in order to make it resolvable as a DNS name, and the socket module of Python has a function getfqdn() which, if it does not find the hostname with an FQDN, scans the /etc/hosts file for aliases that contain dots. Which means that, combining both, we have the getfqdn() function returning the container name, if it contains dots, as an FQDN. Now that's super broken.
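The interaction described above can be reproduced without a container. This is a rough sketch of the alias scan inside CPython's socket.getfqdn(); the real function obtains the names from gethostbyaddr(), which consults /etc/hosts, and all host and container names below are hypothetical:

```python
def pick_fqdn(hostname, aliases):
    """Mimic the scan in socket.getfqdn(): the canonical name and its
    aliases are searched in order, and the first entry containing a
    dot is returned as the "FQDN"."""
    for name in [hostname] + aliases:
        if "." in name:
            return name
    return hostname  # no dotted entry found

# Simulated resolver answer for a host whose /etc/hosts was edited by
# Podman: the dotted container name shows up as an alias and is listed
# before the real FQDN, so it wins.
print(pick_fqdn("myhost", ["ceph-mon.myhost.containers", "myhost.example.com"]))
# prints: ceph-mon.myhost.containers
```

With no dotted entry at all, the canonical hostname is returned unchanged, which is why the bug only shows up once a dotted container name lands in /etc/hosts.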
A
And the second issue is that the cgroups are not shared between the container and the systemd unit. That got fixed by someone contributing it, but that's already in Pacific, as far as I know, so it really doesn't affect downstream, except for the tcmu.
A
You run a container that escapes the cgroup, because we are running it in the background.
B
Talk about that also.
C
Yeah, from a couple weeks ago, I was curious where we are with that. I saw your email, was it yesterday, about, you know, can you use it upstream? Any concerns with that? I didn't see a reply to that. Just wondering.
A
Yeah, it's still a concern, right, if LSO does not have a way to be installed in a vanilla Kubernetes environment. I, for one, don't know if I really want to make it mandatory for the rook manager module upstream, when vanilla Kubernetes installations are not fully supported by LSO.
B
I mean, so just to be clear: we aren't talking about making it mandatory, we're talking about consuming an arbitrary storage class. If it is LSO or some other storage class type that we do understand, then we'll make use of whatever other information we can extract from it, and so we'll quote-unquote support LSO in that sense. But we definitely don't want to tie ourselves to one specific provisioner, because not everyone will necessarily use the same one.
B
That said, we probably should form an opinion about what we want to recommend, and there is no upstream LSO install, though, so that's probably not the one that we want to recommend people use, right?
C
Upstream, yeah. What I know is the LSO team has said in the past that it should install upstream just fine, it doesn't require OpenShift, but the README isn't real friendly about that, or it's not obvious at least, just glancing at the README.
C
So I just need to clarify, I guess, that they won't have any concerns supporting it upstream. And "support" upstream is always loose, right, but just making sure that it's not an issue there. I'm sure it works on standard Kubernetes, though; yeah, we just need to confirm with them if there are any concerns here.
B
Let me just chime in here. I think, setting aside all the politics of the different projects, in our dream scenario the local storage operator that we'd like to have is one that would give us the ability to inspect the hardware inventory, so we can see what the directly attached devices are.
B
It would allow us to provision a raw device as a PV, without any partitions or LVs. It would also allow us to dynamically carve out pieces of a PV: so I want 10 gigabytes here and I want 30 gigs there, either as LVs or as partitions. If we could do all those things, that would be, I think, a fully complete local storage operator. None of them right now do that: LSO only lets you do a raw device as a PV, but it doesn't do dynamic provisioning.
B
It does inspect the hardware inventory, but again it's not dynamic, so it's kind of frustrating. TopoLVM does all the LVM stuff, but it doesn't let you inspect the inventory and it doesn't let you get raw devices. And OpenEBS Local, let's see, it inspects the inventory...
B
I forget, but none of them do all those things. So it'd be nice to have one operator that did all of them, that we could maintain and recommend, and ultimately, I think, include in the product. Because I think one of the gaps with Rook Ceph right now is that everything is always redundant: you can only provision reliable storage, and I think there are lots of users who also want non-reliable storage, because the application they're deploying either just needs scratch space and doesn't need that kind of durability, or because it already handles durability at another layer, because it's MongoDB or Spark or some other thing that has some existing distributed replication framework that it uses. And so I think it'd be a nice complement to Rook Ceph.
B
If there were a rook-local thing that you deploy along with it, that does all the non-reliable stuff, both for upstream and for downstream.
B
So I kind of wonder if we should just basically scope out what we want, then look at the existing projects and what they do, and see whether it makes sense to try to adopt one and extend it with whatever the missing pieces are, or even just build something to fill that gap, right? I think... yeah.
B
Yep, and I mean, I think putting it under the Rook umbrella is an interesting, I guess, community-political conversation, because if memory serves, the Rook brand was repositioned several years ago to not just be an operator for Ceph but to be an umbrella for lots of operators, and that was sort of the basis under which it was accepted into the CNCF. But then a lot of those operators sort of disappeared.
B
So, exactly. But I think...
B
If it had a local one, then it feels like it would be a feature-complete operator: it has all of the storage things that you want. And even in the cases where you deploy on top of EBS, I think it'd be pretty valuable, because you could, for example, deploy the rook-local one by provisioning a big EBS volume and then carving it up with LVM into all these little smaller pieces. You can't do that with EBS.
C
Yeah, and just having an upstream community around it too, because that doesn't exist today for LSO, for example. And also, I think it would help us get to the vision of recommending running everything in Rook on PVs, and a way we could get away from the raw devices if we have something more integrated, potentially, yeah.
B
I'm a little bit nervous about LSO in this case, just because I'm not sure that it... I think it would take pretty significant changes in order to tick all these boxes. Or maybe not, I don't know. The way that it pre-creates all the PVs, for example, is not what we want.
B
I think it needs to be a dynamically provisioned thing: it has an inventory, and then, if you have a PVC that says "I want a complete device", it'll give a whole device, and if it wants an LV, then it would go find a complete device, do the LVM stuff, and then generate your LV, and so on, or a partition, or whatever it is, right?
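The decision logic just described could be sketched roughly like this. Everything here is hypothetical (the claim and inventory shapes are illustrative, not any real operator's API): a claim either takes a whole free device or carves a piece out of one.

```python
def provision(claim, inventory):
    """claim: {"whole_device": bool, "size_gb": int}
    inventory: list of {"path": str, "size_gb": int, "free_gb": int}
    Returns a description of what would be handed back as a PV."""
    if claim["whole_device"]:
        # hand out a complete, untouched device as a raw PV
        for dev in inventory:
            if dev["free_gb"] == dev["size_gb"] and dev["size_gb"] >= claim["size_gb"]:
                dev["free_gb"] = 0
                return {"kind": "raw", "device": dev["path"]}
    else:
        # carve an LV (or partition) out of any device with enough free space
        for dev in inventory:
            if dev["free_gb"] >= claim["size_gb"]:
                dev["free_gb"] -= claim["size_gb"]
                return {"kind": "lv", "device": dev["path"], "size_gb": claim["size_gb"]}
    return None  # nothing in the inventory satisfies the claim

inventory = [
    {"path": "/dev/sdb", "size_gb": 500, "free_gb": 500},
    {"path": "/dev/sdc", "size_gb": 1000, "free_gb": 1000},
]
print(provision({"whole_device": True, "size_gb": 400}, inventory))
# prints: {'kind': 'raw', 'device': '/dev/sdb'}
print(provision({"whole_device": False, "size_gb": 30}, inventory))
# prints: {'kind': 'lv', 'device': '/dev/sdc', 'size_gb': 30}
```

The point of the sketch is the ordering: a whole-device claim only matches devices that are still completely free, while an LV claim can keep nibbling at partially used ones.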
B
I wonder if it makes sense to talk to the LSO team about it. I mean, maybe it makes sense to write up a requirements document: this is what we want to exist, how do we get it? Talk to the LSO folks and see if that's compelling to them and whether they want to own that, and/or also talk to the TopoLVM folks and see if that makes sense.
B
Because my guess is that there's more code to reuse on the TopoLVM side, but I'm not really sure about that. Yeah, because I think it's more of a dynamic provisioning approach, right? Yeah, exactly, more like adding the inventory component and the raw device capability to TopoLVM.
B
But I mean, I've long had this sense that, generally speaking, Ceph is like a one-stop shop for everything you need for storage.
B
You have object, you have block, you have file; you have everything except unreliable storage. And it's easy to say that we think your data should be safe, but there are a lot of valid use cases where you need storage that isn't reliable, or there's just no reason to pay for that, because you have redundancy at another layer. And it's awkward to do that with Ceph: I can create replica-one pools and provision RBDs on top of them, but it's just awkward, and you have all this complexity in the stack that isn't really necessary when a simple LV would do the trick. So exactly, it feels like a good complement. Yep.
A
Replica-one pools are also not that well tested, right? It's just an edge case. Yeah, maybe it's...
B
It's not a good fit, basically, yeah, right. And it's slow, right: if all you need is one replica, you don't need the reliability, but you're still going through all the stuff and paying all the overhead costs, while you're not actually getting any benefit from it. I think the one thing you do get is a single pool of storage, so you can sort of manage it.
B
Similarly, the one thing that I guess maybe the OCS operator or whatever would still probably want to do is look at the utilization of the local storage operator and the utilization of Ceph itself and, based on that, dynamically provision more Ceph OSDs, or maybe deprovision OSDs and reclaim some of those raw devices as local storage, or vice versa: dealing with that balance of how much is non-redundant and how much is redundant storage. And something else too: there's an opportunity there for OCS to do something clever.
B
The current path is fine: it'll have the generic storage class and PV support if they're already pre-created, and it has some minimal framework in there, so that if it's an LSO storage PV, it'll query its custom CRD. We could add similar support for OpenEBS Local if we wanted to, just to show that it's not directly tied.
B
Also, I think that's sufficient for now, and then, when we have another local storage operator that we like better, we'll add the sort of deep integration support.
B
It means that until that happens, though, we won't be able to support drive groups that tell you to carve out the SSD into multiple pieces, because LSO won't do that. But I think that's fine; I think just having the basic support for full, complete raw devices or whatever would be sufficient.
B
So I think that's basically what the current pull request does. So I think we should just basically move forward with that. And... I think he's just... are you there?
B
I think he's got the pull request that does the provisioning, the basic... it does listing, with the LSO support in there. I can't remember if you did OSD add with the new scheme or not.
D
Yeah, I have a branch with OSD creation, but I'm refactoring that at the moment. Just as a side note, for the default case: let's say the user is not using LSO or anything, just the default storage class. Basically, for OSD creation there's no way to distinguish devices by host, so it can basically only support "all available devices".
B
Well, if I'm not mistaken, I think the way to approach that is to have it... basically, the PVC can claim a specific device, right? Or is that still something it has to add? I can't remember.
B
...is to get the basic support into mgr/rook using either the pre-created PVs or LSO. I think we can get all the basics working, including drive groups that use all devices, or a drive group that has some other filters. If we do the filtering on the manager side, I think we can make that all work.
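Filtering on the manager side might look roughly like this. This is a hypothetical sketch, the field names are illustrative and not the actual drive group spec: the module keeps a per-host device inventory and applies the spec's filters itself, instead of expecting the local storage operator to understand them.

```python
from dataclasses import dataclass

@dataclass
class Device:
    host: str
    path: str
    rotational: bool
    size_gb: int

def select_devices(inventory, hosts=None, rotational=None, min_size_gb=0):
    """Apply simple drive-group-like filters to a device inventory.
    hosts=None means every host (the "all available devices" case)."""
    return [
        d for d in inventory
        if (hosts is None or d.host in hosts)
        and (rotational is None or d.rotational == rotational)
        and d.size_gb >= min_size_gb
    ]

inventory = [
    Device("node1", "/dev/sdb", rotational=False, size_gb=500),
    Device("node1", "/dev/sdc", rotational=True, size_gb=4000),
    Device("node2", "/dev/sdb", rotational=False, size_gb=500),
]
# "all available devices" is just an empty filter:
everything = select_devices(inventory)
# only SSDs, on any host:
ssds = select_devices(inventory, rotational=False)
```

Because the selection happens before any claim is made, the same code path works whether the devices come from pre-created PVs or from an LSO inventory.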
C
And also being able to leave the support in for the other operators: say, if you want to use LSO or others, they work, just not as well as the one that's fully integrated.
A
Actually, this week he's going to be on vacation.
A
Perfect, so then we are a bit early. That's awesome. Have a great week and see you next week. Bye, talk to you later, bye, see you.