From YouTube: Ceph Testing Weekly 2018-11-28
C: So, let's see. As promised, we talked about openSUSE and Python 3 and installers in teuthology. I don't know a lot about the Python 3 one, but it came up at the leadership team meeting last week, and it's been on the mailing list; I think maybe Brad noticed, or mailed about, Python 3 not working in Fedora 29. And then Alfredo was mentioning that it was concerning that we don't have a Python 3 only environment in sepia.
E: Clearly, both openSUSE Leap 15.0 and 15.1 can be run in a Python 3 only configuration; Python 2 is entirely optional, and we are building stuff in openSUSE with the Python 3 build, so it doesn't require Python 2 to install it. That said, there are other things that might be in the system that sometimes bring Python 2 in. We're not to the point where Python 2 no longer exists.
E: I would dearly love to be able to do that, and I was very enthused when I read, sometime back, when Lenz Grimmer was at the leadership meeting talking about getting openSUSE into the list of upstream supported distros, that there aren't many non-technical obstacles to that anymore. So really the only obstacle that I know of right now, given that David Galloway has already got an openSUSE Leap 15 FOG image working.
E: He seemed convinced that the right way to implement it is using something called Mock, which wasn't completely obvious to me at first, because the word "mock", you know, is a testing term (a mock in Python means the mock function), but there's a project by that name. I will go find the link to it right now, but it theoretically allows an Ubuntu system to build RPM packages.
E: That said, when we do foreign builds in our downstream testing environments, there are some cases where we're forced to do that: where we have a Jenkins slave that runs rpmbuild, but for a different operating system than the Jenkins slave is actually running. We use Docker for that purpose.
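The foreign-build setup described here can be sketched roughly as follows; the image name, paths, and spec file are placeholders, not the actual ceph-build configuration:

```python
# Illustrative sketch (not the real Jenkins job scripts): building an RPM for
# a different distro than the host runs, by wrapping rpmbuild in a container
# of the target OS.

def foreign_rpmbuild_cmd(image, specfile, workdir="/build"):
    """Return a `docker run` argv that runs rpmbuild inside `image`."""
    return [
        "docker", "run", "--rm",
        "-v", f"{workdir}:{workdir}",   # share the build tree with the container
        "-w", workdir,                  # run from the build tree
        image,
        "rpmbuild", "-ba", specfile,    # build binary and source RPMs
    ]

cmd = foreign_rpmbuild_cmd("opensuse/leap:15.0", "ceph.spec")
# subprocess.run(cmd, check=True)  # would execute on a host with Docker
```

The point of the pattern is that the Jenkins node only needs Docker, not the target distro's toolchain.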
C: Have you thought about, or looked at, just calling out to, what is it, your elastic build service? Yeah, the Open Build Service, to pull them down into the sepia lab? I just really don't know; I worried about build throughput and thought it might be easier than setting it all up ourselves, but maybe it's super hard.
C: You know, one of the things that was mentioned when this was brought up is that we have a certain amount of build throughput and it's kind of close to its max already. So I think this will probably require new hardware, which may be easier now that there's a foundation with money; or maybe we can get SUSE to pony up for a couple of servers to build their OS or something, but.
C: We should not treat that as an infinite pool, I'm thinking. Okay, so hopefully one of those will come along. Also, David, I think last week or so, put a RHEL 8 beta image in the lab; not that anyone's done a lot of work trying to test Ceph on RHEL 8 yet, but hopefully the people who were more worried about the Python question will have options soon. So that's good.
C: So, moving on: I've been looking at the teuthology installer interfaces that the ceph tasks use, and I haven't gotten as far into what the existing ansible tasks actually do as I wanted, but I've gotten a good look in. For those of you who aren't aware, we have a ceph-ansible task in the upstream repository, which can install Ceph via ceph-ansible, and there's a small suite that uses it and a couple of sub-suites. Red Hat's internal QE group has a much more capable version that they use for downstream testing, but the task itself is publicly accessible; I think it's in an RH branch of the teuthology repository, and they have some suites as well. Those aren't public, I don't think. So I've been sort of looking at how the ansible task differs from the ceph tasks, and the delta is a lot smaller than I thought.
C: One of the things I am a little fussed about is how we do the configuration, like the actual yaml fragment, so that most of the other tasks don't care about what environment they're in. Mostly that's not a huge deal. I think we just set up an installer fragment in the suite directories, you know, for ceph-ansible or DeepSea, and then hide them behind a second installer abstraction task that swaps them.
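The abstraction being proposed might look something like this minimal sketch; the task and backend names are illustrative, not the real teuthology API:

```python
# Hypothetical sketch of the "installer abstraction task" idea: a single
# entry point reads which backend the suite's yaml fragment selected and
# dispatches to it, so the rest of the suite doesn't care which one runs.

BACKENDS = {}

def register(name):
    """Register a backend installer under a config name."""
    def wrap(fn):
        BACKENDS[name] = fn
        return fn
    return wrap

@register("ceph-ansible")
def install_with_ceph_ansible(config):
    return f"installing via ceph-ansible with {config}"

@register("deepsea")
def install_with_deepsea(config):
    return f"installing via DeepSea with {config}"

def install_task(config):
    """Abstraction task: pick the backend named in the yaml fragment."""
    backend = config.pop("installer", "ceph-ansible")
    return BACKENDS[backend](config)

print(install_task({"installer": "deepsea", "branch": "master"}))
```

A suite fragment would then only set `installer: deepsea` (or similar) in overrides, and every other task stays environment-agnostic.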
C: What is a little interesting is that that seems to work, from how ansible is being deployed in the suites I've looked at. My main concern, and I think it won't be a big deal but I haven't validated this really well yet, is that a lot of the tasks do specify specific Ceph config options. So we need to provide the wrapping in the different installer tasks to pass those out to the nodes, and I assume all these installers support that really easily.
C
But
if
one
of
them
doesn't,
then
it's
gonna
be
a
bummer
and
all
right.
The
other
thing
I
haven't
done
yet
is
gone
and
checked
the
standard
in
and
standard
out
on,
the
demons,
but
the
the
Red
Hat
that
bans
will
task
does
set
up
demon
cluster
objects,
though
I
think
that
probably
all
works
the
way
we
needed
to
which
is
exciting
hooray.
So
it's
not
real
fast
progress,
but
that
is
continuing
on.
Hopefully,
he'll
do.
G: On the common installer method: I think we can just have one generic installer task, and everybody can just use that generic installer function to call their own internal install functions, and we can just have one overrides section. I mean, this is what my thinking is; if anyone disagrees, or has another method, maybe they can give their input.
G: So we can just call it the ceph installer, and then the overrides can also be ceph installer; and if SUSE wants to just call its own function, it can use its overrides and call the installer function, and someone downstream, if we're interested in ceph-ansible, can call the same installer method and ceph-ansible runs.
C
Yeah,
similarly,
the
plan-
it's
just,
we
want
to
make
sure
that
we
don't
that
the
interface
actually
supports
that
and
it
doesn't
leak
through
what
finish
you're
using
because
right
now
it
very
much
does
so.
We
need
to
set
up
sort
of
the
API
expectation
endpoints,
and
there
are
a
few
other
things
that
need
to
go
in,
like
the
step.
Ansible
task
doesn't
check
the
the
daemon
logs
for
the
sub
cluster
logs
for
for
health
warnings
and
things.
E: Yeah, you know, I haven't been talking about this with anybody until today, but I've been thinking about it, and I'm glad that you're not calling it an installer task, as I've been at a loss as to what to call it. But this installer abstraction task, which will then call out to the different, what you call installers, or what we call external orchestrators: DeepSea, ceph-ansible, the ceph tasks, and now there's also Rook coming.
E: My understanding is that in the future, when this orchestrator module becomes a thing, instead of running DeepSea commands directly, or ceph-ansible commands directly, or Rook commands directly, we will be able to issue a ceph mgr orchestrator command and not really care about what the underlying external orchestrator is, or what the underlying installer is. So the heretical notion in my mind was: could we somehow push both of these things forward together, this installer for teuthology and the manager orchestrator module?
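A minimal sketch of what "issue a ceph mgr orchestrator command" could mean from a test harness; the subcommand strings here are placeholders, since the orchestrator CLI was still nascent at the time of this meeting:

```python
# Illustrative only: a harness builds generic orchestrator commands so the
# caller never names DeepSea, ceph-ansible, or Rook directly. The subcommands
# ("status", "osd add") are made-up examples, not the confirmed CLI surface.

def orch_cmd(*args):
    """Build a `ceph orchestrator ...` argv for whatever backend is active."""
    return ["ceph", "orchestrator", *args]

status = orch_cmd("status")                       # query the active backend
add_osd = orch_cmd("osd", "add", "node1:/dev/vdb")
# subprocess.check_output(status)                 # would run on a live cluster
```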
C: A good question, and I've been thinking about it a little bit. I think it was discussed a while ago, and maybe more in the future, but there's a couple of things that stop it. First of all, right now the manager still has a bootstrapping problem: it needs something to install it the first time around, and the mon also needs something to install it the first time around.
E: Both of those. My idea for the first one, as I understand it, is that it's possible for the orchestrator module to run when there is a cluster consisting of one mon and one mgr. And if we had a bootstrap task that deployed a single mon and a single manager, then we could hand over to the orchestrator, yeah.
C
And
I
mean
and
then,
if
that's
done
fast
enough
and
cool,
but
hopefully
this
takes
less
time
then
well,
I,
don't
know
how
long
we
think
the
orchestrator
is
gonna
take,
but
it's
still
pretty
nascent.
The
other
thing,
of
course,
is
that
I
don't
know
your
company
imperatives
are
but
like
we
need
to
keep
supporting
step,
ansible
in
Red,
Hat
and
so
like
for
a
while,
and
so
these
sorts
of
things
aren't
gonna
go
away
and
I
would
like
to
converge.
C
The
tooth
ology
instance,
like
I,
would
like
to
reduce
the
number
of
tooth
ology
Forks
in
the
world.
That's
part
of
what
I'm
going
on
here
and
so
the
more
we
make
them
work
for
more
environments
like
this,
the
better
so
I
definitely
look
forward
to
working
with
the
manager.
But
it's
not
yet
like
anything,
I'm
I
think
I
can
replace
anything
working
right.
E: Well, I mean, it sounds like you're already implementing this installer task, or at least, you know, blocking it out, so, good. So then integrating it with the manager orchestrator module is something that could be done subsequently. Yeah, but I am keen on seeing your code as soon as it's ready to be seen, so that I can think about, you know, getting it to work with DeepSea, because we also have the corporate imperative of testing our installer.
A: Did we lose the bastion? I... I was done, 'cause there are more questions.
C: Well, and also, for things like... once we can invoke Rook from within it, then this is easier; but for things like ceph-ansible, we still would need to have teuthology provide ceph-ansible with the manifests, or whatever they're called, of nodes that it can use for doing those deployments. So there will still be work to happen outside of the mgr. Yeah.
I: But I would assume that's the same result, and even for, not Kubernetes, but Rook, it actually handles it that way. Basically, you will be installing Rook, or the equivalent process, and then, as soon as you have that in, whatever your orchestrator is will be able to know what the cluster looks like, to some extent, right? Yeah.
E: Instead of saying we're going to wait, you know, for the orchestrator module to emerge, it seems more realistic to do it in two steps. And, as Rick says, and as I say, on the teuthology side it's never going to be just a matter of the orchestrator module; it's never going to do everything that teuthology needs.
C: Well, but it does mean that, for instance, teuthology needs to support running ceph-ansible or DeepSea or Rook, so that the manager, when it gives the orchestrator command, has an orchestrator to talk to. So in this world, it would be like: the orchestrator would be determined by whatever you say in the Ceph installer task, whichever one you specify, and it's used to initially set up the cluster; and then the manager orchestrator interface test would just invoke the orchestrator management commands, and they would call out to it.
E: Could we turn this existing ceph task into an external orchestrator that's capable of working with the orchestrator module? Maybe, yeah. I mean, that's one option, and then the other option is the capability of teuthology to install and configure some other external orchestrator. For DeepSea, that's already finished downstream, but it's blocked by not having openSUSE upstream.
E: The thing is that shaman polls the ceph-ci repo, looking for branches that appear there, and then it builds them, right? So, first of all, shaman doesn't know how to build for openSUSE Leap 15; and second, it can't farm the task out to the OBS to build Ceph. I'm betting the openSUSE Build Service doesn't allow that kind of use, with a robot triggering builds.
E: Well, no, I mean, it would be only for OBS. OBS is a public resource, so SUSE doesn't want external things, you know, totally taking over the resources, being used by robotically triggering millions of build jobs. That's why that's there, and theoretically this could be construed as a robot triggering millions of build jobs; realistically, that's what could possibly happen.
E: We already have code, right, in ceph-build. I'm not familiar with the code, but one of the things it does is it polls the ceph-ci repo looking for new branches in it; if it sees a new branch, it builds it for all of the supported operating systems. I believe that's what ceph-build does. So it's a matter of patching ceph-build to build for openSUSE Leap, in addition to Ubuntu and CentOS and whatever else it builds for, and then shaman will pick them up.
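The polling loop described here can be sketched as follows; this is not the actual ceph-build code, and the repo handling and OS list are illustrative:

```python
# Rough sketch of the described workflow: watch a repo for branches that
# weren't there on the last poll, and queue one build per supported OS.

import subprocess

SUPPORTED = ["ubuntu-18.04", "centos-7", "opensuse-leap-15.0"]  # Leap added, per the discussion

def list_remote_branches(repo_url):
    """Return the branch names currently on the remote."""
    out = subprocess.check_output(
        ["git", "ls-remote", "--heads", repo_url], text=True)
    return {line.split("refs/heads/")[-1] for line in out.splitlines() if line}

def new_branches(known, current):
    """Branches that appeared since the last poll."""
    return sorted(set(current) - set(known))

def build_matrix(branch, os_list=SUPPORTED):
    """One build job per (branch, OS) pair."""
    return [(branch, os_name) for os_name in os_list]

jobs = [job for b in new_branches({"master"}, {"master", "wip-fix"})
        for job in build_matrix(b)]
```

Under this model, adding openSUSE support really is just a matter of extending the OS list plus the per-OS build recipe.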
F: If it's capable of building for openSUSE, sure. One of the questions I had was: do you need, or I guess want, packages for every ceph-ci branch for openSUSE? Or do you only want to test specific branches?
C: We have developers who actually are involved with that distribution who are on the Ceph project, so it bumps a lot higher up on the distribution priority list than some things. So, you know, it would be nice if we could just build it all the time in the same way, and let the scheduler just select between them.
D: One thing I wanted to add, about using the Ceph installer orchestration and all of the changes that we're trying to make in teuthology: if we don't already have a tracker, it would be really good to have one, so, you know, everyone can add in there and we can make sure that we're all thinking on the same page. Just my two cents.
J: So I have a couple of questions. One of them is: our current upstream teuthology is using libcloud 1.5, which is pretty outdated, and it probably can't be used with our cloud, because it has a new API, the v3 OpenStack API version. I'm going to spend some time making a patch which will use a newer version of libcloud.
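A sketch of what moving to a newer apache-libcloud with Keystone v3 auth (the "v3 OpenStack API" mentioned here) might look like; the endpoint, project, and credential values are placeholders:

```python
# Hypothetical sketch: connecting to an OpenStack cloud via libcloud using
# Keystone v3 password auth instead of the legacy v2 auth that older
# libcloud/teuthology code assumed.

def openstack_v3_kwargs(auth_url, project):
    """Driver kwargs requesting Keystone v3 password auth."""
    return {
        "ex_force_auth_version": "3.x_password",  # v3 instead of legacy v2
        "ex_force_auth_url": auth_url,
        "ex_tenant_name": project,
    }

def connect(user, secret, auth_url, project):
    # Imported lazily so the sketch runs without libcloud installed.
    from libcloud.compute.providers import get_driver
    from libcloud.compute.types import Provider
    cls = get_driver(Provider.OPENSTACK)
    return cls(user, secret, **openstack_v3_kwargs(auth_url, project))

kwargs = openstack_v3_kwargs("https://keystone.example:5000/v3", "teuthology")
```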
C: I can champion that for you; I won't be able to do everything, I think. Usually what happens for changes like that is a branch gets pushed, and then some of the worker processes get switched to it, and if they don't break, then good; otherwise we roll it back and revert the patch. Which is definitely not the most comfortable or best way to handle it, but it's what we have right now, unfortunately. Yes.
J: I will merge it; I will create a PR. The next problem is whether it's even possible to run the tasks on a system that I do not support, like Red Hat or something, I mean all of that; and another thing is how the upgrade will happen. So it's three kinds of testing: the first I can provide; for the second I need some help; and probably the third one should be someone who is managing the sepia infrastructure, no?
C: I mean, I think we just have to pick a time and, like, click the button, and then go restart the worker processes and watch those processes to see if they succeed or fail. Because, I mean, we've just got our pool of however many teuthology worker processes, and those run with what they've got; so we can take them, give them the new code, and then make sure that they don't break on it. But that's... I agree.
B: I don't know if it can help, but I think I said that before: typically, for teuthology-related fixes, since we don't have a separate staging environment, what we can do, if you commit a change and you tested it on your level and you believe that it's looking good, is actually schedule, you know, a small subset of suites in sepia on that teuthology branch, run it through, and watch it for a day or two; or we can actually manually schedule some suites on that branch.
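Scheduling a small subset of suites against a custom teuthology branch, as described here, might be invoked roughly like this; the flag names follow teuthology-suite's CLI, but treat the exact values and defaults as placeholders:

```python
# Illustrative: build a teuthology-suite invocation that runs only a slice
# of a suite using the code from a work-in-progress teuthology branch.
# Suite name, branch name, subset, and machine type are made-up examples.

def schedule_cmd(suite, teuthology_branch, subset="1/16", machine_type="smithi"):
    """Build a teuthology-suite argv pinned to a teuthology branch."""
    return [
        "teuthology-suite",
        "--suite", suite,
        "--teuthology-branch", teuthology_branch,  # run the workers' code from this branch
        "--subset", subset,                        # schedule only a slice of the suite
        "--machine-type", machine_type,
    ]

cmd = schedule_cmd("smoke", "wip-libcloud-update")
```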
J: So, the problem with the upstream: I don't know how the sepia lab infrastructure is organized, but the problem is, when I update the requirements file or setup.py, I will increase the libcloud version, and I don't know if sepia will update the virtual environment or install new versions of the libraries, yeah.