From YouTube: Ceph Orchestrator Meeting 2021-09-28
A: Hello everyone, and welcome to today's orchestrator meeting. Let's see what we have. We have big topics today: we have OSD creation via storage classes, we have HAProxy, and we have a forward-looking topic, looking into 2022. Other than that, we should probably add to our meeting notes who actually added which topics, because for me it's a bit of a guessing game who's actually adding topics.
B: Oh, I guess I put this on there, didn't I? Okay, yeah, I think we just need to... I hadn't actually thought about this at the time, but yes, right: we need a way to create OSDs backed by storage classes. I think the current device inventory obviously isn't sufficient; there needs to be something else that exposes at least what the storage classes are named. So I guess a new orchestrator API for storage classes, probably.
B: I guess, I don't know, and that would return whatever storage classes are available in the cluster. Yes, they're going to be things like EBS provisioning types, probably.
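For reference, a Kubernetes StorageClass carries little more than a name, a provisioner, and provisioner-specific parameters; for EBS, that is where the provisioning type shows up. A minimal illustrative example (the gp2 name and parameters are placeholders):

```yaml
# Example Kubernetes StorageClass; "gp2" and its parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs   # in-tree EBS provisioner
parameters:
  type: gp2                          # EBS volume (provisioning) type
```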
B: I don't think it necessarily makes sense to try to make this mean something in a bare-metal, cephadm case. Well, we could say that they're like the device types, SSD and HDD, maybe; you could do that.
A: We already have that in the inventory. It doesn't really make sense to have yet another API that returns the same thing.
B: Yeah, okay. And then we need a drive group property, storage_class or something like that, that you can actually set, and then, I think, hopefully the existing OSD creation machinery should just work.
B: Probably somebody who understands Rook better, and how OSD creation works there, should figure out how that works, because right now the way storage classes are implemented is on the side of the manager module: we look at the device inventory, we match those devices against the storage class, and if they match, then we go trigger an OSD creation.
D: So in the CRD we have a storage section, and we have sets, where each set can represent a set of OSDs. Each set is, I think, just a bundle of disks, if you want. And we have a volume claim template that we add in the set, and it points to a given storage class.
D: And that's how we use it: we go through each set, and then inside each set, for each volume claim template, that's how we create the OSDs. Essentially, inside the volume claim templates you can have many types, where each type can represent whether it's an SSD or an HDD, and that's how we do data-versus-metadata drive detection, essentially.
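As a concrete illustration of what is being described, a Rook CephCluster storage section with one device set and separate data and metadata volume claim templates looks roughly like this (names, sizes, and storage class names are placeholders):

```yaml
# Sketch of a Rook CephCluster storage section, as discussed above.
storage:
  storageClassDeviceSets:
    - name: set1
      count: 3                      # number of OSDs created from this set
      volumeClaimTemplates:
        - metadata:
            name: data              # main OSD data device
          spec:
            resources:
              requests:
                storage: 100Gi
            storageClassName: gp2
            volumeMode: Block
            accessModes: ["ReadWriteOnce"]
        - metadata:
            name: metadata          # small metadata device, cheap on elastic cloud storage
          spec:
            resources:
              requests:
                storage: 10Gi
            storageClassName: io1
            volumeMode: Block
            accessModes: ["ReadWriteOnce"]
```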
B: So this would be a one-to-one mapping, right? Like, one of those items, one of those device sets, would map one-to-one? That's right, yeah. And it has to have a count, I would assume, or else it'll just keep creating.
D: I mean, it is easy to do on bare metal: if the storage class has disks, then you can pick up any disks. But as soon as you want to play with the data, then you're going to be picking up entire disks; or, yeah, that's how it would work today, because everything is governed by the size. I mean, if you look at the volume claim template, we have to specify a size, which gives the size of the OSD.
D: The size could essentially be the whole disk, but the size for the metadata should be rather small, which again in the cloud is really easy to do, because we have elastic storage everywhere and everything is dynamically provisioned. So it's easy to ask EBS to give us, say, 10 gigs; but on bare metal it's a bit more tedious to ask for a small piece of storage for metadata. But maybe that's just too advanced.
B: Yeah, okay, yeah, absolutely. This seems very straightforward, actually. We just need to make the validate function on the drive group require that a count accompanies the storage class if it's set, or else the spec is invalid; and then it should just directly map to an item, a device set or whatever, in the cluster CR, right?
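A minimal sketch of what such a drive group (OSD service spec) might look like; note that the storage_class field discussed here is hypothetical and does not exist in the spec today:

```yaml
# Hypothetical OSD service spec: storage_class, and its required pairing
# with count, are the proposal under discussion, not existing fields.
service_type: osd
service_id: cloud_osds
placement:
  host_pattern: '*'
spec:
  storage_class: gp2   # hypothetical: the k8s StorageClass to back OSDs with
  count: 3             # validation would require this whenever storage_class is set
```

Each such spec would then map one-to-one onto a storageClassDeviceSet in the CephCluster CR, as discussed above.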
D: Yeah, and for now I think we only need a single set. You would use multiple sets if you want different properties for the OSDs. For example, say you want one set of OSDs that are encrypted and another set that isn't, where those OSDs are not encrypted; that is how you would use sets. I guess right now you would only have a single set with a count, and then maybe multiple volume claim templates inside it, if you want a metadata block device, yeah.
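Rook's device sets do support per-set encryption today, so the encrypted/unencrypted split would look roughly like this (set names, counts, and storage class are placeholders):

```yaml
# Two device sets with different properties: one encrypted, one not.
storage:
  storageClassDeviceSets:
    - name: encrypted-osds
      count: 3
      encrypted: true               # dmcrypt-encrypted OSDs
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            resources: {requests: {storage: 100Gi}}
            storageClassName: gp2
            volumeMode: Block
            accessModes: ["ReadWriteOnce"]
    - name: plain-osds
      count: 3
      encrypted: false              # unencrypted OSDs
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            resources: {requests: {storage: 100Gi}}
            storageClassName: gp2
            volumeMode: Block
            accessModes: ["ReadWriteOnce"]
```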
B: Okay, and the get-storage-classes property on the orchestrator would just enumerate all of the storage classes available in the cluster and return all of them, presumably. Are there any properties there? If you're enumerating storage classes, you don't really know anything about them, right? Is there even a description or anything, or is it just a name?
E: Yeah, I had this on the agenda for last week, but we didn't get to it before I had to jump to the huddle. I see that there are some notes from last week, but yeah, I wanted to at least briefly bring it up again.
E: I know Seb did some work recently to try to help us catch those things, having nightly jobs that test upgrades against the Octopus, Pacific, and master branches, but I also want to make sure that it is at least somewhere on Ceph's radar too.
A: Do you have a list of the breakages that have occurred?
E: And I think one of the other things was when ceph-volume abruptly dropped support for partitions; I think maybe that was a little over a year ago.
D: And I think the most recent one is that all of a sudden one container image just disappeared from Quay, and our entire CI was down because the image wasn't there anymore.
D: Yeah, I guess you can just add us for verification, for QE testing, when, or just before, releasing; then we just go ahead and say: okay, we have tested with the tip of Pacific or whatever, and it's not breaking for us, so we're good to go. I think that should be simple enough.
B: A good idea. I have a question, though: what is the key gap, I guess, with what we're doing right now? Because we have a test suite, the orch/rook suite, that we're running now. I guess we're only doing it on master, so we're not doing Pacific yet, but that should cover most things, right? We're running against Rook master, or Rook tip or whatever, and also one or more stable releases of Rook.
B: That's just a smoke test, I guess, right now, but that should...
D: Yeah, I think it's fine, and also I've added many more tests in our CI to test upgrades between stable versions and running on master and the latest, like the tip of Pacific, so we should be covered on our side as well. With that said, should we add an upgrade test? Which upgrade do you mean, of the operator?
B: Yeah, right now it's a direct test, deploying whatever the current version of Ceph is, the tested version of Ceph, with two different versions of Rook. But we could have it deploy an older version, an older Ceph version, and then upgrade to the current version, and let Rook do the upgrade, to make sure Rook upgrades properly.
D: But yeah, I guess I'll just go and ask Yuri about that, whether we can be included, because, yeah, the normal case should always be okay, and I mean those occurrences were pretty rare. But I think it would be good to establish a better communication channel too, so I'll just ask.
A: Review it, approve it, approve changes; maybe not merge it or develop it, but at least be a part of it.
D: But we do review already; we do that already, I think. When the recent PR came in, we chatted quite a lot about it too, and reviewed it. So that we do, I guess.
B: But I could add you as a code owner or whatever. Is there a GitHub group for Rook developers?
E: I wonder if we could figure out how to chain the pipelines together, to build a Ceph image and then...
C: Sure. So I had sketched out some thoughts on the Etherpad, but one thing that I would like to be able to do is offer either the current strategy, which is the HAProxy-based approach, like an L7 strategy, or an L4 strategy, which would use IPVS instead of HAProxy.
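For context, the current L7 strategy corresponds to the cephadm ingress service, which deploys HAProxy plus keepalived for a virtual IP. Roughly (IDs, addresses, and ports are placeholders):

```yaml
# Current cephadm ingress spec: HAProxy (L7) fronting RGW, keepalived for the VIP.
service_type: ingress
service_id: rgw.myrgw
placement:
  count: 2
spec:
  backend_service: rgw.myrgw   # the RGW service to load-balance
  virtual_ip: 10.0.0.100/24    # VIP managed by keepalived
  frontend_port: 8080
  monitor_port: 1967           # haproxy status/health port
```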
C: You would get scalable read traffic, because IPVS can do direct server return. So you can have a number of RGWs, and, at least for GET requests, the size of the request that's coming in through the VIP (which is always going to be a single node in this case) is small, but the response is usually big; and with HAProxy, at least with L7, you can't do that.
C: You can configure IPVS to do direct server return, so, while the GET request comes in through the server that has the VIP, the response goes directly from the RGW backend to the client. So you can have GET traffic, at least, at a higher volume than you might otherwise be able to support with a single node. That was kind of the next step.
C: So we would reuse the keepalived stuff, and instead we would just change out the HAProxy part, effectively, for programming IPVS.
C: It would have slightly different options than HAProxy because, of course, you can't terminate TLS, so there would be no SSL cert, because it's all L4; and there are maybe some different settings that we would want to surface in terms of the way that you do IPVS. But yeah, there's a good article that I posted in the Etherpad, from Cloudflare.
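One way the L4 variant could be surfaced is as a mode on the same ingress spec. This is purely a sketch; no such field exists today, and the point is that the TLS options simply disappear:

```yaml
# Hypothetical L4 ingress spec: "mode: ipvs" and its field set are assumptions.
service_type: ingress
service_id: rgw.myrgw
placement:
  count: 2
spec:
  mode: ipvs                   # hypothetical: program IPVS instead of running HAProxy
  backend_service: rgw.myrgw
  virtual_ip: 10.0.0.100/24    # keepalived-managed VIP, reused from the L7 path
  frontend_port: 8080
  # no ssl_cert: L4 cannot terminate TLS, so it would be rejected here
```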
C: I think one thing that we want to be cognizant of is that the way IPVS does direct server return is by rewriting the destination MAC address (sorry, not the source IP address) from the VIP's MAC address to the MAC address of the server that will end up serving the traffic, and that only works if the machines are on the same L2 network.
C: So it works if all the machines that are participating in the load balancing are in the same subnet. The way that Cloudflare and other places get around that (because sometimes you might have a Ceph cluster that spans, say, three racks, where each rack has its own /24, and you want traffic to be able to come in one rack and then go out another rack) is that they set up tunneling, and that's what the foo-over-UDP (FOU) tunnels are.
C: So it's like setting up a tunnel to be able to send the rewritten Ethernet frame over, and then it'll come back out directly to the source IP. So that's an additional bit of complexity; I don't know if we want to take that on as part of an MVP, but I do think it's something that we do see out there, where people have, you know, three racks that each have their own subnet. But yeah, interesting.
C: So Maglev is a hashing method that made its way into the Linux kernel for IPVS. The thing that does similar functionality for Kubernetes is called MetalLB (there you go, okay, yeah), and that actually has two different strategies; we're doing the easier one first, which is the IPVS one. They also offer a BGP-based one, but I think we're working up from, like, Maslow's hierarchy of needs.
C: Let's start with what we have: HAProxy, then IPVS, and then we can figure out something more sophisticated later, for people that need to be able to scale write traffic into the cluster as well. I have some thoughts about how we could potentially do that, but I think this is a good step in terms of adding load-balancing functionality to cephadm.
C: To what extent would it be useful for me to try to get the list of commands for the actual IPVS configuration? I can expand upon that in the Etherpad in terms of the actual implementation, but I would defer to one of you who is more familiar with the cephadm codebase.
A: Yeah, let's put it into a tracker (I did that already), and that might be a topic that you can look into for the next version, the one coming after Quincy, I guess.
C: Okay, I just wanted to plant the seeds for it. So yeah, I'll continue to elaborate on the Etherpad, and I see you have the tracker. If it's useful, then we can, you know, make a card on the community Trello or whatnot for the post-Quincy planning.
A: Okay, yeah. Thank you, Kai.