From YouTube: Ceph Performance Meeting 2022-12-01
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
B
Here, yes, that was me, I added that. Let me start with that then. So this is something I noticed after skimming through some of the emails when I got back from some time off: we had another one of these events where a bunch of things in the tests broke, because the upstream had a major version number bump and they changed certain things. I think it was flake8, I'm not entirely sure. I ran from one meeting to this meeting, so...
B
Yeah, I'm just running a little late; I don't even have half the minutes up yet, and those are the wrong minutes anyway. This is just one of those things I wanted to talk about, because I noticed it was one of these cases: it was pinned with an ==, which is fine as a short-term thing, but I get a little nervous when I see these, and I start going, well, what if we don't remember about this? The next thing we know, we've been on some old version of flake8 or mypy or whatever it is for three years, the world has moved on, but we've been trapped in carbonite for however long because we did an explicit version pin. I was wondering if we shouldn't have some kind of general policy; it doesn't have to be super strict.
B
Something that says: if a tool we're using has a major version break, it's okay to pin a version, but try to only pin the major version, unless you know there's a problem with the minor versions. And then what do we do? Do we try to get working on the new version? Is that something we want to do? I just want to throw that out there for discussion.
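For illustration, here is a minimal sketch of the two pin styles being discussed, using the `packaging` library; the package name and versions below are made-up placeholders, not the project's actual pins:

```python
# A made-up example of the two pin styles under discussion; the
# package name and versions are placeholders, not the project's pins.
# Requires: pip install packaging
from packaging.specifiers import SpecifierSet
from packaging.version import Version

hard_pin = SpecifierSet("==5.0.4")    # frozen: quietly goes stale
major_bound = SpecifierSet(">=5,<6")  # takes minor/patch fixes, blocks 6.x

for candidate in ["5.0.4", "5.1.0", "6.0.0"]:
    v = Version(candidate)
    print(f"{candidate}: hard_pin={v in hard_pin}, major_bound={v in major_bound}")
# 5.0.4 satisfies both; 5.1.0 only the major bound; 6.0.0 neither, so a
# new major release forces a deliberate decision instead of a surprise.
```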
A
At least with flake8, I feel like it would be nice to be on the more recent version. I didn't go look at what actually broke, whether it was something big or not, but I'm assuming they pinned the version because the people who could have been around to fix it were all away.
B
Yeah, and like I said, I'm personally okay with doing a pin just to get over the hump for a week or two, or maybe even three, but once you move beyond that, the knowledge is not going to be fresh anymore, and if we forget that the pin is there, we could be shooting ourselves in the foot. And the other aspect was... what was I just going to say?
B
If we do pin versions, it might be good, maybe in the docs, I don't know, to have a central location to keep track of it all, because we have multiple tox.ini files, which makes it hard to find all the things.
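As a rough sketch of what that central view could start from, assuming the pins live in the `deps` sections of the various tox.ini files (real files may need more careful parsing):

```python
# Rough sketch: walk a checkout and list every "==" pin found in the
# deps sections of its tox.ini files, so all pins show up in one place.
# Assumes lines like "flake8==5.0.4"; real files may need more care
# (duplicate sections, env markers, -r requirements includes, etc.).
import configparser
import pathlib
import re

PIN = re.compile(r"^\s*([A-Za-z0-9._-]+)\s*==\s*(\S+)")

def find_pins(root="."):
    for ini in pathlib.Path(root).rglob("tox.ini"):
        cp = configparser.ConfigParser(interpolation=None)
        cp.read(ini)
        for section in cp.sections():
            for line in cp.get(section, "deps", fallback="").splitlines():
                match = PIN.match(line)
                if match:
                    yield ini, section, match.group(1), match.group(2)

if __name__ == "__main__":
    for ini, section, pkg, version in find_pins():
        print(f"{ini} [{section}] {pkg}=={version}")
```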
A
Yeah, I get what you mean. I kind of agree that it feels like we should have something a bit more formal than what we're doing, but I'm not exactly sure what that is either, because there are so many dependencies, and it seems like different people are more aware of how to handle each one and what each one does. So it's hard to have a policy, because we'd have to get everyone to actually stick to that policy, right?
A
I mean, I feel like for those ones we can leave them sort of as-is, or maybe we want to pin them more, find a stable version that works for them. And then for main, I don't know, maybe we try to pin to just major versions again and see if we can upgrade, and we'd still have to keep on top of it.
A
We'd almost have to schedule something, I don't know, like once every three or six months, where we look at the dependencies and see if there are new versions, which would require sort of what you're talking about: a list somewhere of the ones we're worried about, so we can look at them regularly. But I think we'd have to schedule it; if we just leave it like that, I don't know how often we'd actually do it.
B
Like I said, I'm just kind of putting it out there as food for thought.
A
Yeah, and if you want to have a docs page for it... because if we ever want to actually do something like this, where we set a regular time to go through and try to upgrade the dependencies, we do need to start with writing down what the dependencies are. We have parts of the docs that are more developer-oriented, so we could probably put that there, make some page.
A
Yeah, I mean, that seems like it would be a good starting point: you'd have somewhere you can say, this is what we have right now. And I guess the next thing would be going through them, and then having some sort of regular "let's go through the dependencies and see if we can bump any of them."
A
If there have been any more major releases, yeah. Oh, and that could be the way we do it, because we could use pinning for them; we wouldn't have to worry about breaking things if we're able to commit to upgrading them like that. So if we do this thing where we have some script that points them all out, then we say every three to six months we're going to go through these and see if we can bump any of them, whether there have been any more major releases in any of these...
A
...then we could pin them, and it would solve the problem of things randomly breaking, and if we were actually good about it, it would still work for keeping things up to date.
B
Sometimes it's hard to proactively find those, but at least if we do pin something, we can then proactively look to remove the pins in this quarterly cycle.
B
Sure, sure, yeah, you can explicitly go from five to six, but the only issue is that then you have to look at all your pins and say, well, what has upstream moved on to? Because we moved from five to six, but if they move from six to seven in another month or two, then you're constantly having to keep up. I don't know, I'm just...
A
It works because I don't want to track those either; I don't want to worry about when they're releasing things. If we had a nice place, like a file saying these are the dependencies we have and these are the versions they're pinned to, and there was some easy way to go check if they have new versions...
A
We could do it maybe all at once: just put up a pull request, see what happens if I update some of them, do a bit of testing there, see which ones are easy and can just be updated. Maybe some of them are harder, I don't know, yeah.
A
Because if we get something like that set up, some infrastructure, it's kind of like how we did the coverage report for the binary, where we don't run it regularly or anything, but it's there: something that can check what our dependencies are and then check what the most recent version is.
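For the "check what the most recent version is" half, a minimal sketch could just ask PyPI's JSON API; the pins dict here is a placeholder, where a real script would feed in the inventory collected from the tox.ini files:

```python
# Sketch of the "what's the most recent version" check, asking PyPI's
# JSON API. The pins dict is a placeholder; a real script would feed in
# the inventory collected from the tox.ini files.
import json
import urllib.request

def latest_version(package):
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

pins = {"flake8": "5.0.4", "mypy": "0.991"}  # placeholder pins
for pkg, pinned in pins.items():
    latest = latest_version(pkg)
    note = "" if latest == pinned else "  <-- newer release available"
    print(f"{pkg}: pinned {pinned}, latest {latest}{note}")
```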
A
If we just, every three or six months, got together and ran that, and this one has a new major version, you'd just see it on whatever output we have, and we could try updating them. That could maybe be a way of handling this without worrying about things breaking for other people just because a version happened to increase and doesn't work with what we have.
A
I like that, yeah. I mean, the big part of that is that we have to get the infrastructure set up first: we need some script to get our list of dependencies and the versions we have, and I guess we need some way of getting the most recent version of those dependencies. But if there's a way to do that, then that would be really nice.
A
To get back to it, I was going to talk about the testing stuff a little bit in here, right after this topic, but I can at least file it as a kind of tracker.
A
We have a few of those things as well; there's still one for the way we handle diffs in the tests.
A
I know you looked at it a little bit, like having a different lib... oh yeah, in the tests, yeah. I remember trying to explain that. Yeah, like the diff in the tests that compares two different things.
A
We have a cleanup tracker for that, so it's still there and we won't forget about it. We can do the same thing for this and get to it at some point. For now it looks like the tests are passing at least, so nobody's blocked, and we can try to come up with something later. Okay, so we'll have the cleanup tracker, and I'm going to get this automation going.
A
Yeah, so I'll put some stuff in the doc, or in the meeting minutes afterwards; I'll link the tracker I make.
A
I already linked the document, but it was in a downstream Google Chat room, so maybe I can share the doc with you, Mike. Well, first I'll link this: these are the test results. They did a main baseline; I think it was the 23rd that it ran on. It looks like they fixed the stuff mid to late last week, so I went through all of the test failures in there, at least the orange cephadm ones.
A
I ignored the orange Rook ones, because those were pretty much broken already, as I mentioned before. There are a bunch of dead jobs in there too, but I don't think we have to worry about those; they were all errors reimaging machines, which is sort of out of our control, so I ignored those ones. Then I went through the failed jobs, and I came up with seven failures.
A
And
like
I,
just
shared
it
with
your
your
Gmail
address,
I've
sent
stuff
to
you
before,
just
some
random
doc
that
I
put
together
these
failures.
Yeah.
A
So I was looking through that this morning, and there are seven, although some of them are much more frequent than others. Like the first one on the list, which is that it seems like we can't get any mounts to work on our NFS exports at all; it just always seems to fail, for multiple different reasons. I put a couple of them in that first point there.
B
I was just going to say, I had started to look at that about 15 minutes before the meeting started. My main question on that, before you move on to other stuff, is: are any of them passing, or...
A
Yes, exactly. I even mentioned in there that the fifth one is something we actually have a fix for in a pull request that's open but not merged; obviously they were just running the main branch. I can see that some of the normal NFS tests have passed, but not any of the NFS ingress ones. Let me check the logs of one of these real quick and see if they actually do that mount or not.
B
One of the errors looked like an NFS version mismatch, and I was wondering... I was going to start looking at which ones failed on which distros, to try and see if, say, all the NFS version mismatches were on Ubuntu but the other ones were on CentOS or something. I didn't get that far, and then I got interrupted by a different downstream task.
A
Yeah
I
have
nothing
looked
in
detail
at
them,
yet
I
was
going
back
and
actually
looking
at
some
of
I
have
a
bunch
of
pull
requests
that
have
review
comments
that
I
have
not
addressed.
I
was
going
to
activate
some
of
that
stuff.
I
was
going
to
come
back
to
this
after,
but
I
I.
Don't
think
the
tests
that
passed
at
our
NFL
actually
do
any
of
this
mounting
I
think
they
just
like
they
just
deploy
it
or
something
like
that.
I
think
the
mounting
is
failing
all
of
them.
A
I
know
some
of
the
ones
that
fail
they're,
definitely
not
doing
Ingress
they're
just
NFS
as
well
I'm,
pretty
sure
they
are.
There
are
NFS
tests
that
have
passed
again,
I,
don't
think
they're
doing
this
mounting
I'm
pretty
sure
this
is
failing.
University
okay
could
be
wrong,
but
it
seems
to
be
that
way.
At
least
well.
I
checked
the
one
at
least
one
of
the
ones
that
passed
and
it
didn't
happen
to
have
any
of
the
mounting
in
it.
I
feel
like
that's,
probably
not
a
coincidence.
A
I
think
we
do
have
a
few
tests
that
just
make
sure
we
can
deploy
it
fine
and
they
don't
actually
do
much
with
it.
A
Yeah,
so
there's
that
one,
so
that's
a
big
one,
I
think
it's
almost
half
of
the
failures.
The
second
one
I
was
a
bit
confused
by
it
ends
up
with
it
can't
find
the
Stef
ADM
file
that
is
supposed
to
pull
I
have
some
it's
not
very
nicely
formatted
in
there,
but
I
have
a
big
piece
of
the
output
in
there.
It
eventually
comes
up
to
unable
to
execute
home
Ubuntu
idiom,
no
such
filer
directory.
It
looks
before
that.
A
It's
trying
to
pull
something
from
Chakra
I
didn't
look
still
exactly
why
it's
failing
or
what
really
happened
there,
but
I
thought.
Maybe
it's
possible.
That's
that's!
What's
going
on,
I,
couldn't
pull
it
for
some
reason,
but
I
believe
that
one
only
happened
once
and
there's
over
100
tests
and
I
assume
a
multiple
of
them
were
trying
to
do
this.
So
maybe
it
was
just
a
fight
thing.
A
You
might
not
have
something
about
that.
Didn't
worry
about
that
much,
but
it
did
happen.
So
I
included
that
one
as
well
yeah
that.
A
I
said
that
one
might
have
just
been
a
weird
one-off
because
it
failed
on
a
thrash
test
as
well.
I
believe
that
was
the
only
thrash
test
that
was
in
the
failures,
and
it
failed
into
like
this
weird
way,
but
we'll
see
I
guess
if
that
one
comes
up
more,
if
it's
only
doesn't
drop
at
all
or
it's
extremely
rare,
maybe
we
can
sort
of
ignored
for
now,
or
maybe
it
is
some
problem
with
some
of
the
infra
is
still
not
totally
working
properly.
A
Three is "cannot stat /etc/containers/registries.conf". I didn't really look into this one at all; I think I've seen it before, but I don't know what causes it. It happened twice in this run, and I thought both were the same distro, but no: there's one CentOS 8 Stream and one RHEL 8.6, totally different tests. One is an MDS upgrade sequence; the other was just a smoke test that deploys a handful of things and doesn't really do much.
A
Those
seemingly
totally
two
unrelated
tests
had
this
failure
again.
I,
don't
know
if
this
is
just
some
weird
flicky
thing,
so
we
did
we'd
have
17
tests
to
fail,
reimaging
machines,
so
it's
possible
that
there's
some
other
things
that
were
just
wrong
with
the
infrastructure.
Still
that
we
can't
really
fix
on
our
side.
A
It looks like it's the exact same test, yeah, the RHEL smoke test. I didn't check any of those paths yet; I have to look at that one. I mean, it's just installing a package, so maybe it's something we have to fix in the initial setup part of the test, or maybe it's somewhere in the image, because they did put up new images of all the distros.
A
They basically fully re-image the machine whenever a test is starting, so they have these images somewhere that get put on the machine before the test starts, but they had to remake all of them because they lost them; I think they lost pretty much everything.
A
So
it
was
possible
if
there
was
something
special
about
one
of
the
ones
had
before
that
it
could
have
been
missing.
So
if
there
is
some
weird
thing
where
some
packages
failing
to
install,
then
it's
possible
against
anything
like
that.
Even.
A
There are a few things like that. Like I said, three and four kind of look like that, maybe two as well, where it could just be some strange little infra thing, but we have to track them at least, because if they're consistent, then I guess we'll have to figure out what's going on and at least help the people who work on that stuff know what the problem is.
A
So,
as
we
do
more
runs
well,
I
guess
we'll
see
for
two
three
and
four:
if
they're
actually
going
to
be
a
consistent
problem
or
not
because
I
think
again,
there's
already
been
a
few
things
have
been
changed
in
the
infra.
Since
this
status,
that's
run,
run
I'm
planning
to
do
another
one
today,
I
have
a
build.
It's
in
progress
It's
so
far
green,
including
the
Centos
8
default
x86,
so
you'll
be
able
to
be
good
to
go
later,
a
little
bit
to
see
another
test
run.
A
Let's
see,
then
five
and
six
are
ones
I've.
Definitely
seen
before,
like
I
already
told,
mentioned
five,
it's
just
a
problem
with
the
staggered
upgrade
since
we
it's
an
app
that
rebumped
the
starting
version
from
octopus
specific
when
they
moved
to
v18
and
it
was
broken.
It
was
already
a
pull
request
up
to
fix
it
that
obviously
wasn't
included
in
the
Run,
so
I
think
five
should
be
fine.
A
The
Essie
links
and
I
also
tests
that
vdm
task,
those
those
were
there
before
they
they're
a
bit
annoying
because
they
don't
happen
every
time.
But
it's
always
on
this
this
test,
where
they
do
happen,
if
they
are
going
to
happen,
but
I
do
remember
those
being
there
before.
A
So
that's,
not
a
new
thing,
I
think
I
hadn't
quite
been
able
to
figure
out
what
was
causing
that
again,
because
it
doesn't
happen
every
time
and
usually,
if
that's
the
only
thing
that
failed
in
the
test
I,
you
know
you
know
it
probably
didn't
really
didn't
break
anything
if
it's
doing
that,
so
I've
been
kind
of
ignoring
those
ones
a
little
bit.
So
five
and
six
shouldn't
be
a
big
deal
and
then
seven.
This
was
a
one-off.
A
Only
one
test
failed
like
this,
but
it
was
just
we
deployed
ffs
mirror
and
then
it
just
timed
out
waiting
for
it
to
show
up
in
the
orange
PS
and
I'm,
not
sure
exactly.
Why
I
think
that
test
passed
a
few
other
times
as
well
go
check
real,
quick.
A
Yeah,
if
it
passed
four
other
times
and
just
failed
once
with
this,
so
I'm
not
certainly
sure
what
happened
there.
Maybe
it's
a
test
set
again,
it's
just
sort
of
like
you
know.
We
had
that
with
some
of
the
NFS
stuff
before
where
some
of
the
timings
made
it
fail
very
rarely
like
one
out
of
five
one
of
ten
times
it
could
be
one
of
those
some
people
have
to
actually
fix
here,
but
not
a
huge
deal,
I'm
less
worried
about
that
one.
A
When
the
five
six
and
seven
those
ones
are
a
bit
more
manageable.
I
have
to
see
what
happens,
they're
more
normal
failures,
I
guess,
I
put
them
in
the
three
different
categories.
Here:
five,
six
and
seven
are
like
actual
sort
of
stuff
things
or
things.
Our
test
is
doing
that
we
can
sort
of
deal
with
that
either
were
there
before
or
the
more
normal
types
of
failures.
A
Two
three
and
four
are
strange,
possibly
infer
related
things
that
we'll
have
to
or
to
see,
if
they're,
consistent
or
not,
and
then
one
we
we're
just.
This
is
a
consistent
mounting.
These
things
doesn't
work
at
all.
Maybe
it's
a
problem
with
with
the
image
they
have
up
there.
Maybe
it's
you
know,
something's
not
set
up
properly.
I
will
have
to
get
that
one
fixed,
because
right
now,
all
of
our
NFS
and
grits
tests
are
failing.
A
Yeah
I,
just
I
have
a
build
I'm
gonna
do
another
run.
If
the
failures
are
only
these
failures,
I'm
gonna
probably
be
merging.
The
pull
requests
that
are
tagged,
I
mean
they're.
All
I
didn't
look
through
them
all
to
see
if
they're
all
approved
properly
I'm,
assuming
they're
all
approved
here,
I'll
link
the
list
of
tagged
ones.
A
It's
one
of
mine
or
there's
other
two
they're,
both
mine.
My
four
requests.
A
One
of
them
just
hasn't
been
reviewed
at
all.
It
looks
like
and
that's
the
handling
of
the
managers
as
a
one-liner,
then
there's
migration,
current
one
which,
let's
say
was
reviewed,
I,
think
I
addressed
these
comments,
this
one's
pretty
old
yeah.
So
if
anyone
does
have
a
little
bit
of
time
review,
maybe
those
two
I
think
everything
else
in
here
is
approved
and
I'm
willing
to
merge
these.
A
Yes, that's just my search with the cephadm testing tags, and I think it also has a couple of other things, like whether they're all open or whatever. I could link it... I don't know about this doc; I don't know if I should link just the ones in it, because I would have to share it universally, I guess, if I wanted to put it in. Right now it's just in the downstream Google Chat, and you can see it there.
A
Yeah, that's to say, on the testing stuff, if we fix the mounts thing then we'll be in a pretty good spot. I think some other issues will go away on their own, like the fifth one. At that point we'd only have a handful of tests failing, and the ones left failing after that aren't even a test that's failing every time; it's a test that has some weird sporadic failure where it's like:
A
Oh, I couldn't find this file, or open this, or install this package. And it's not even the same test every time for some of them. So if we can get that first one fixed, then we'll be in a good spot. Like I said, I'm willing to even merge pull requests with that still broken, because we've been waiting so long; we should be able to get stuff back on track over the next couple of weeks before the holidays, hopefully.
B
Okay. I'm out this afternoon, but if you're still looking for help on the NFS thing tomorrow or Thursday, ping me. I will probably need to ask you questions about how to debug it, because when you Google the "protocol not supported" error, you see things like: oh, the kernel module doesn't support NFS v4, it's only got v3, or vice versa; or things like DNS, where if the lab was rebuilt, the DNS is different.
B
Maybe it's that. So I'm not sure how we'll debug all of these, but we might have to jump on some of these nodes and poke around.
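For that kind of poking around, a small sketch like the following (run as root on a frozen node; the server, export path, and mountpoint are placeholders) could narrow down whether it's a protocol-version problem by trying each NFS version in turn:

```python
# Hedged sketch of the "poke around" step: on a frozen node (as root),
# try mounting the export with each NFS protocol version to see which
# ones the client/server pair accepts. Server, export path, and
# mountpoint are placeholders.
import subprocess

SERVER = "nfs.example.com"     # placeholder
EXPORT = "/export/test"        # placeholder
MOUNTPOINT = "/mnt/nfs-probe"  # placeholder; must already exist

for vers in ["3", "4.0", "4.1", "4.2"]:
    cmd = ["mount", "-t", "nfs", "-o", f"vers={vers}",
           f"{SERVER}:{EXPORT}", MOUNTPOINT]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        print(f"vers={vers}: mounted OK")
        subprocess.run(["umount", MOUNTPOINT])
    else:
        # "Protocol not supported" here points at a version/module issue
        print(f"vers={vers}: failed: {result.stderr.strip()}")
```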
A
I'm going to be in the office tomorrow; are you going to be there or not? I plan on it, yeah. I haven't been in for a few weeks now, but I was planning to go back tomorrow. I also need to do an expense report for the trip two weeks ago, so I'll be doing some stuff like that.
B
Yeah, if everything goes according to plan, that might be a good thing to get together and do tomorrow.
A
Hopefully, if the test queue isn't too blocked up, we can even have another run with the tagged pull requests. Maybe we can see if some of these failures pop up again as well, assuming nothing in that run breaks too; it's possible that something that's tagged there is actually... yeah.
A
Yeah, so I'm saying, I don't know, it's kind of tough. I know what you mean; I don't really want to merge it, because it does affect some of the exports and stuff. We do have, again, some other NFS tests that make sure we can deploy it and things like that, but I don't know if they really exercise the stuff that your PR touches.
A
And that's the big one. If we can fix that one, then everything else shouldn't be as big a deal, because most of the other failures are pretty sporadic, so they shouldn't block any testing; we'll have other instances of the tests passing, and we'll know it's just a weird thing with the infra or whatever. That'll be good; we can do that.
B
Yeah
so
let's,
let's
plan
on
doing
one
of
these,
like,
like
you,
said,
like
freeze
the
nodes
when
the
test
fails
and
then
we
can
go
on,
we
can
look
at
like
the
D
message
from
the
kernel
or
you
know,
examine
the
state
of
the
Ganesha
server
Etc.
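A minimal sketch of the evidence to grab once a node is frozen might look like this; the command choices are assumptions about what's useful here, not an established procedure:

```python
# Minimal sketch of evidence to capture from a frozen node before
# anyone starts changing its state; the command choices are assumptions
# about what's useful, not an established procedure.
import subprocess

def capture(cmd, outfile):
    with open(outfile, "w") as f:
        subprocess.run(cmd, stdout=f, stderr=subprocess.STDOUT)

# Kernel ring buffer, where NFS mount errors show up.
capture(["dmesg", "-T"], "dmesg.txt")

# Recent host journal, which includes the Ganesha container's logs.
capture(["journalctl", "--since", "1 hour ago"], "journal.txt")

print("wrote dmesg.txt and journal.txt")
```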
A
Yeah, that sounds like a good idea. I think someone has done that before as well, collecting the dmesg on certain test failures to see why things are stuck, yeah.
A
It's only for things that are really tricky that I ever have to do any of this, but it can be kind of useful to be able to lock those nodes, even just for messing with it outside of a test: if we can just lock the node, this is the image that they deploy when they run.
B
That's
why?
Because
I'm
I'm
concerned
that
the
issue
is
infrastructure
say
either
DNS
config
or
the
new
images
are
missing,
say,
although
current
necessary
kernel
modules
or
something
like
that
by
freezing
the
system,
and
then
we
can
look
at
it
like
you
know,
in
C2
or
whatever
the
right
term
is
just
say:
oh
okay,
the
reason
that
the
kernel
doesn't
Mount
is
because
it
can't
resolve
the
IP.
You
can't
resolve
the
or
can't
do
it
reverse:
DNS,
okay,
yeah.
A
We
can
go
run
through
the
test
interactively
if
we
have
to
but
I'm
guessing.
It's
going
to
be
that
we're
going
to
try
this
on
some
of
the
machines
and
is
we're
not
going
to
do
it
manually
either?
That's
what
I
assume
is
going
to
happen
we'll
have
to
see,
and
that's
even
assuming
that
they
haven't
done
something
since
this
test
fan
that
could
actually
fixed
it
or
something
because
I
assume,
if
other
people
are
testing
NFS
stuff
as
well.
A
I'll see if we can figure those out soon, yeah. Anyway, that sounds like a good plan. I'm going to look at the NFS stuff tomorrow, and for the other ones we'll just see if they keep popping up. Hopefully over the next week or two we can get through it, because there's a huge backlog of stuff to get merged; there are probably like 25 or 30 pull requests.
A
All
right,
in
that
case,
we'll
call
it
there
and
I
guess
I'll,
see
you
all
next
week.