From YouTube: Ceph Developer Monthly 2021-03-03
A: All right, looks like we got a quick quorum for the first topic, at least. Let's get started. So, welcome to the CDM for March 2021. We'll start off with Kyle here, who wants to talk about ways to make things easier to use out of the box.
C: So I guess kind of the primer for this, and I discussed this a little bit at the performance weekly a couple weeks back, is that we did a bunch of testing and optimization around RGW that transcended the whole stack, and we really wanted to make some of the configuration work we did to get that really good performance more accessible to the people who are using it. So as I mulled over it...
C: ...I started to have this idea of having an almost pre-canned set of erasure-coded, you know, EC profiles. EC profiles are already a thing in RADOS, so we would just have a set that already exists in clusters when they start up, and pools could use those profiles instead of having to specify their own. Then, building on that, we could potentially have RADOS clients do something in terms of having different profiles of parameters, almost like how we currently have settings for different storage classes.
C: So in some cases we have, say, a generic setting, and then an HDD or an SSD setting. But this would be a way to have a whole profile of settings that would be applicable to particular pools, almost building on top of that.
C: RGW would use those for pools that are using a particular erasure coding profile, because for a lot of these things there's alignment that you want to achieve as you move from the RADOS client down to RADOS and to erasure coding: you want to align things like chunk and stripe sizes, and so on.
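The alignment constraint described here can be sketched concretely. This is a minimal illustration, not Ceph code, and the example sizes are hypothetical:

```python
def aligned_stack(client_stripe_size: int, ec_k: int, ec_stripe_unit: int) -> bool:
    """Check that a client-side stripe maps cleanly onto EC stripes.

    One EC stripe is ec_k data chunks of ec_stripe_unit bytes each;
    writes that are whole multiples of that stripe width avoid
    zero-padding at the erasure-coding layer.
    """
    stripe_width = ec_k * ec_stripe_unit
    return client_stripe_size % stripe_width == 0

MiB = 1024 * 1024
# 4 MiB client stripes line up with a 4+2 profile using a 1 MiB
# stripe unit, but not with a 6+2 profile using the same unit.
```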
C
And
you
know
I
could
pull
up
some
diagrams
that
talk
about
the
you
know
the
different
different
chunk
and
stripe
and
window
sizes
that
are
up
at
the
rgw
layer
and
then
the
different
kind
of
ways
that
we
chop
up:
data
for
erasure
coding,
but
there's
kind
of
like,
instead
of
it
being
like.
If
people
go
and
try
to
adjust
the
defaults,
they
really
have
to
have
a
pretty
deep
understanding
of
multiple.
D
C
Of
the
stack
to
do
it
successfully,
so
this
would
kind
of
help,
I
think,
have
kind
of
like
some
pre-can
profiles
and
then
going
further.
You
know
potentially
having
some
sort
of
blue
store,
hints
or
pg
attributes
based
on
the
pool,
so
you
can
also
have
like
different
properties
for
blue
store
down
at
the
lowest
level.
So
you
know
you
can
probably
have
blue
store
optimize
for
particular
pools
around.
You
know
the
way
that
it
does
check
something
or
the
blob
sizes
that
it
allocates
so
yeah.
C: That's kind of the premise of what I tried to capture in the etherpad, but I'd be interested in an open discussion around that kind of idea.
A: First, I guess: how much do these depend on the use case? The stripe unit, for example, seems like it could potentially vary a bit. If you have a wider range of object sizes, you don't want a bunch of extra padding added when you have small objects.
C: Right. One of the downsides of more aggressive stripe units, with the way that our EC currently works, is that you pad up to the stripe width, and so you potentially have space and I/O amplification.
C: I think our default stripe unit is something on the order of 4k. So perhaps the values that I have in here are maybe not great for a general setting, but I think there are pre-canned profiles that we could put in there that would be sensible for different use cases. Having the ability to have the cluster start up with some reasonable set of erasure profiles (I guess they could be defined somehow) seems like it could be useful. Exactly what values you set for the stripe unit and the k and m, we'll probably have to have discussion around that.
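As a sketch, a pre-canned profile table could look something like the following. The profile names and parameter values here are placeholders to illustrate the shape of the idea, not proposed defaults:

```python
# Hypothetical pre-canned EC profiles keyed by intended use case.
PRECANNED_EC_PROFILES = {
    "ec-small-object": {"k": 4, "m": 2, "stripe_unit": 4 * 1024},
    "ec-general":      {"k": 4, "m": 2, "stripe_unit": 64 * 1024},
    "ec-large-object": {"k": 8, "m": 3, "stripe_unit": 1024 * 1024},
}

def pick_profile(expected_object_size: int) -> str:
    """Map a rough expected object size (bytes) to a profile name."""
    if expected_object_size < 64 * 1024:
        return "ec-small-object"
    if expected_object_size < 16 * 1024 * 1024:
        return "ec-general"
    return "ec-large-object"
```

A cluster could ship with such a table and let pools reference the profiles by name, leaving the exact k, m, and stripe_unit values open for the discussion mentioned above.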
A: Yeah, I think that makes a lot of sense, to have those available in general. These are just very common configurations that people use; you don't want to force them to recreate them themselves each time they install.
F: It seems like we need some sort of matrix, right? I mean, currently pools have applications that you can enable. Now it goes down to the level of: if you are running an RGW application, what kind of matrix is available to you? What kind of workload are you running?
F: So we need to define high-level things, like: if we know, or some user knows, that their objects are going to be really small, which profile should they be using? We need to come up with those broad categories in general, I think.
A: I wonder if you could make some of that automatic too, like by using different pools for different sizes of objects, or packing things together more to create larger RADOS objects.
C: Yeah, object packing, I think, is something that we've had discussions about too.
D: Yeah, that's been coming up in the roadmap considerably in the last three weeks. I think we're going to revisit it, but it's not necessarily ahead of other things in the order. There are some areas I don't want to go into here, but RGW can steer this to appropriate resources. From the RADOS level below, though: what is the right smorgasbord of options to give an application like RGW, when it knows what strategies to use?
C: Yeah. So I guess the idea of starting with our erasure code profiles was because they were already a thing, and it seemed like probably the lightest-weight thing to preload; somewhat influenced by the sort of testing that we see people talking about at the performance weeklies, and from various teams and such.
A: Yeah, those ones definitely seem very clear in terms of what the commonly used k and m values are; I don't think there's a lot of argument about those. I guess what I'm less clear on would be the different kinds of optimizations you want to make for different sorts of applications.
A: Is it purely about the object sizes, and RGW aligning object sizes with the underlying RADOS object sizes?
D: Is there anything you want to do there? Because, I mean, this maximum-bandwidth workload that Kyle was working on was extreme, but everything depended on getting every single piece aligned in the stack, and four meg was like a magic number for it. How much of that are we able to get in the future, if we actually care, by...
C: ...going all the way down through the stack, right. That meant that BlueStore was doing four-megabyte I/Os, and the memory allocations were such that it actually ended up as a single bio when it was issued down to the raw block device. We got that with some tweaks to BlueStore and stuff like that, and we saw really good performance. But to make that more accessible (and I think I'm in complete agreement with you, Sam) I guess we could just inspect the k value and then dynamically adjust the various RGW parameters: a parameter bundle that you could specify, plus some sort of hinting or PG annotation, so that when BlueStore is doing allocations for that particular pool it could use different settings than it would use for other pools. Because that one pool is going to have, you know, immutable objects, because it's RGW or whatever, and because jerasure is going to zero-pad the EC stripe, the size of the I/O that goes down to BlueStore is going to be consistently sized. So you can basically just have a fixed allocation for buffers and for space allocations from the block device.
A: So for the hints: we already have the RADOS hint that says this object is going to be immutable, and we have this concept of setting other kinds of flags along with a write. Could we just use those existing flags to apply this kind of tuning, aligning things to the size of that, right?
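librados does let clients attach allocation hints to objects (expected object and write sizes, plus flags such as immutable). As a toy model of the idea above, turning those hints into a fixed allocation unit in the store, with made-up names:

```python
def allocation_unit(expected_write_size: int, immutable: bool,
                    default_unit: int = 64 * 1024) -> int:
    """Pick the allocation unit a store could use for one object.

    If the client hints that writes are fixed-size and the object is
    immutable, allocating in exactly that unit lets buffers and disk
    extents be sized once up front; otherwise fall back to a generic
    default.
    """
    if immutable and expected_write_size > 0:
        return expected_write_size
    return default_unit

# An immutable object written in fixed 4 MiB chunks gets 4 MiB
# extents; everything else keeps the default unit.
```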
C: I think so. I actually think that gets us most of the way there. The only remaining thing, maybe, is that we need some sort of size hint or annotation on the PG, so that we know what we expect the I/Os going down to BlueStore to be, so that we can size the allocations from the actual block device in almost fixed-size chunks for that particular pool. Even though that might be a completely unreasonable strategy generally, for that particular pool it may be completely sensible and beneficial.
A: That's probably worth discussing more with Igor and Adam, the BlueStore developers.
C: I guess on the RADOS application side, like I was saying, we could just make RGW smarter and either look at EC profiles by name (and have some pre-canned ones) or have it actually look into the settings that are set within the EC profile and try to do something smart there, using some sort of heuristic or algorithm or something. But I think it could potentially be more generally useful than just RGW: anywhere you have some package of parameters that you want to set on a per-pool basis or a per-application basis. Maybe it's similar for things like the striping configuration for CephFS or RBD or whatever. I wouldn't want to limit ourselves; it seems to be more generally useful than that, and we might not be able to consider up front all the different parameters that might be useful. So: if we had the ability to create profiles of ceph.conf parameters that get applied to particular pools, based on some sort of something, whether it be an EC profile or, I don't know, whatever.
D: I respect what you're saying. And I mean, what Greg is saying is sensible too; we'll make it visible. But it sounds like, if we have several choices, we should teach the system how to work toward the middle, because it won't always know how to do various things. We know it when we see it; like, this workload was crazy.
D: I mean, the idea was (and it's not bad) that this was a maximum-bandwidth workload: maximum-effort bandwidth, nothing else matters, everything is four-meg aligned reads and writes. And because of that, there was a prescription that it should get within fifteen percent of optimal. But we got pretty close: we got within like two percent of that. So that's all this teaches us. It's cool, but it doesn't tell us how to handle a general-purpose environment, and that's where things get more interesting. But if we want to capture this, then what should we do?
F: I think, going back to Kyle's point: we've done something very similar for our dmclock profiles. We have something called a config set. It's not applied on a pool basis, but it's like a setting. Say we want to optimize for client I/O: the user just sets that, and a bunch of ceph.conf settings, which were earlier separate settings, get set automatically under the hood, based on whatever performance testing we've done and whatever we think should come with that profile. So it's almost going to be similar. The concept of an application already exists; we extend that under the hood to say, okay, this application optimizes for x, y, z, and under the hood we set those stripe sizes, stripe widths, whatever we want to do, even in BlueStore.
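A sketch of the config-set idea just described: a single named profile expands into a bundle of individual settings under the hood. The profile and option names here are invented for illustration, not real Ceph options:

```python
# One user-visible profile name maps to many low-level settings.
PROFILE_BUNDLES = {
    "optimize_client_io": {"client_io_weight": 80, "recovery_weight": 20},
    "optimize_recovery":  {"client_io_weight": 20, "recovery_weight": 80},
}

def expand_profile(name: str) -> dict:
    """Return the individual settings a profile name stands for."""
    return dict(PROFILE_BUNDLES[name])
```

The user only ever sets the profile name; the bundle contents are maintained centrally, based on whatever performance testing informed them.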
B: Yeah, so there are lots of heuristics we could apply, but it sounds like we've got to find at least one really good set for a particular workload, and we have a good idea what they look like for mixed and smaller workloads. So you just say: look, there are three descriptors for possible RGW I/O patterns, pick the one that's closest, and then we set it up when you create that pool, and that's it.
H: Where do these settings live? That's interesting but, I think, not critical. The obvious place to put it, if we don't want to do anything invasive, is to add it as an interface in the dashboard that just knows how to talk to the relevant components. So I claim that part is trivially solvable.
H: I mean, okay, so yes, that would require a change to RGW, which is my second piece. The second piece is that there are parameters we don't currently have places to add knobs for. One of them is that RGW needs differential per-pool configs; perhaps another is that BlueStore may need differential per-pool configs. Those can be accomplished either through the existing hints or through some pool parameter; it doesn't matter.
H: The third thing is what defaults are actually worth presenting to people, and how we deal with situations like this one that are truly exceptional. If you attack them separately, then for the last case, I think, you present defaults that are sane for real workloads that really happen, with an escape hatch of an arbitrary YAML that describes all of the relevant knobs, which you can then just distribute if you need to do something clever.
H: That covers case one, which is: where do we store it? If you really want to, you can put it in the OSDMap if that makes you happy, but it doesn't actually matter. All that matters is that you have a common config language that knows how to touch the relevant pieces for the config set. I don't think it's actually important how you do it.
B: Yeah, most of what you're describing sounds like it's either just defining the basic setups that we want to offer (like the three, or whatever) and then getting RGW to respect the fact that there might be more than one of them to access at a time, and that'll require actual RGW code modifications. There are a few different ways that could happen, and I think whichever one RGW wants is probably going to be the solution that CephFS wants too, but CephFS will have a much easier time adapting to it, because we already do a bunch of stuff per file that RGW doesn't. Files can already have variable layouts, and stuff like that is all plumbed through the stack already.
B
Yeah
and
and
rvd
has
like
support
cc,
but
it's
just
maybe
I'm
wrong,
but
I
get
the
impression:
that's
just
it's
so
much
less
interesting,
because
no
one
who
cares
about
their
performance
is
ever
going
to
use
it.
C: Yeah, I do see it that way. The things that are relevant for RGW here would generally probably be useful for CephFS, where you have different directories that map to different data pools or something, and you might want different parameter bundles for them. But maybe...
B: I think the only thing that would be weird is that I think we have global read-ahead and write-ahead values. But in CephFS we already support randomly sized objects with arbitrary striping unit boundaries on a per-file basis and all that: you just stick it in an xattr on the file and then it just accepts it.
C: Not completely. I think we've kind of agreed that they would be useful, and that we'll probably have an idea of a few that will be reasonable, based on common usage. Like, right here I have 4+2 and 8+3, which, at least with the default 4-megabyte striping (which makes sense for a lot of things), work out: you want the k to be a multiple of four. And then, if we had other k values that weren't multiples of four, once we had the other pieces, some sort of per-pool set of chunking and striping parameters, we could detect those and set them, or something. I don't know. Right off the bat, we probably wouldn't want to start by loading up a 4+2, a 6+2, and an 8+3, because if people just used the 6+2 with the existing defaults, that would not be good, right? It wouldn't be aligned; it would be four megabytes divided by six.
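The arithmetic behind that objection can be checked directly: a 4 MiB stripe splits into whole chunks for k values like 2, 4, and 8, but not for 6.

```python
MiB = 1024 * 1024

def chunk_size(k: int, stripe: int = 4 * MiB):
    """Per-chunk data size when one stripe is split k ways,
    or None if the stripe doesn't divide evenly."""
    if stripe % k:
        return None
    return stripe // k

# k=4 gives clean 1 MiB chunks; k=6 leaves a remainder.
```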
J: I mean, it seems like there's the part where, if there's a pool that has a given EC profile, then there are certain ways that RGW or CephFS could constrain the striping behaviors that it considers, based on that: it's always going to be a multiple of, I guess, the stripe width. But it's unclear, necessarily, whether you want to have multiple stripes in the same object, like how big the objects should be, right?
C: Yeah, right. So even if you have big objects: in my test I set the stripe width to 24 megs, and that was big, but it meant that if someone writes 25 megs, then it space-amps up to 48 megs.
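That 25-to-48 number is just the last partial stripe being zero-padded up to a full stripe width, which is easy to check:

```python
MiB = 1024 * 1024

def padded_size(object_size: int, stripe_width: int) -> int:
    """Space consumed once the final partial stripe is zero-padded
    up to a whole stripe width."""
    stripes = (object_size + stripe_width - 1) // stripe_width
    return stripes * stripe_width

# Writing 25 MiB against a 24 MiB stripe width consumes two full
# stripes, i.e. 48 MiB.
```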
C
So
so
so
I'm
thinking
in
most
cases
we
like
like
lately
and
probably
a
lot
of
the
benefit
that
I
saw
was
probably
not
so
much
from
playing
with
the
stripe
stuff
but
more
about
getting
k,
aligned
with
the
various
chunking
and
striping
parameters
in
rgw
right.
So
that
seems
like
the
big
win,
not
the
yeah
playing
with
the
stripes,
though.
A: What about having RGW place things differently depending on their size, in general?
J: You could imagine things like the manifest and the main data pool being, you know, 3x replicated with reasonable settings or whatever, but then having all these EC pools that are optimized for very large objects, or for medium objects, and then, based on the content length, you just choose where it's going to go.
D: Each part has its own stripe set.
C: I think the default for the s3a filesystem client is: if the object being written is larger than 64 megs, then it will use multipart, and by default it'll use 32 MB parts. So it'll write out full 32-meg parts, and the last part could potentially be fractional, right? Then it'll do a commit at the end; usually it uploads a bunch of parts and then does a whole series of commits together.
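Using the numbers quoted above (a 64 MiB multipart threshold and 32 MiB parts, both client-configurable in practice), the part layout works out as:

```python
MiB = 1024 * 1024
MULTIPART_THRESHOLD = 64 * MiB  # cutoff quoted above for the s3a client
PART_SIZE = 32 * MiB            # default part size quoted above

def part_sizes(object_size: int):
    """List of upload sizes: one single-shot upload below the
    threshold, otherwise full parts plus a fractional last part."""
    if object_size <= MULTIPART_THRESHOLD:
        return [object_size]
    full, rest = divmod(object_size, PART_SIZE)
    return [PART_SIZE] * full + ([rest] if rest else [])

# A 100 MiB object becomes three 32 MiB parts and one 4 MiB tail.
```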
C: Well, you know what would be useful: going beyond just dumping stuff into an etherpad on my behalf, are there things that I could do around tracker issues, or trying to better articulate individual tasks here, that would be beneficial? I'm probably not going to be...
F: Can you just start using our backlog Trello and create separate cards? There's a link to it.
J: Yeah, okay. Like maybe making sure that the EC parameters translate into proper hints for, say, BlueStore checksum unit sizes or something like that. Those sorts of things would be good.
C: Right, and even some of the stuff that I was thinking about, like what BlueStore does for particular pools. I guess we kind of have this functionality right now, but it's kind of mixed. For example: perhaps still have checksums on BlueStore, so that you can use them for scrubs, but have them disabled, on a per-pool basis, for verifying on read. I think that seems like it should be supported.
J: But yeah, making sure the chunk sizes line up, that's definitely something, I guess.
C: Because some applications are going to do their own validation on read, with checksums that are inside the actual object, so it's not necessarily useful to be doing that in two places; it's just additional overhead. But you still want the checksums there so that you can use them for scrub and that sort of thing.
A: Yeah, I think that would take some modification to the way scrub works, because right now it just literally reads the objects, and BlueStore applies its checksum behavior to verify internal consistency.
A: The first part is around optimizing machine time and how we're using the lab for tests. We're going to do some summer projects around analyzing some of this, and I want to brainstorm which kinds of things we should be looking at in this process. Looks like a couple of folks have already added some ideas to the etherpad here.
A
Let's
third,
do
this
there's
a
there's,
a
few
different
categories
of
things?
One
aspect
is
looking
at
the
actual
execution
of
the
tests
and
improving
how
how
those
are
executed.
A
Another
aspect
is
looking
at
the
coverage
in
the
suites
and
seeing
how
much
of
that
is
historically
been
most
useful
versus
what
always
kind
of
fails
at
the
same
time,
or
it
doesn't
cover
the
same
kind
of
code
paths
or
different
code
fast
from
each.
A: We could also consider some higher-level improvements in how the tests are run. For example, some portion of every test involves setting up and re-imaging a node and installing all the packages; even with cephadm we're installing all the packages so that we can run all the client-side tests. If we were able to move all the client workloads into containers as well, we could get rid of that, and we could potentially bundle tests together more, to avoid the installation and re-imaging overhead for each individual job.
A: Some ideas are already in here. Brandon added decreasing the amount of logs we're collecting; I think Jason just had a PR today to do that for all the non-RBD suites, reducing the BlueStore log levels so we collect them in memory but don't write them to disk, and therefore don't need to save gigabytes of logs every time we run tests. So that's a big, huge improvement.
A: Yeah, I think it would be worth looking at whether we could reduce that long full run time, because there are only a few tests in the first place.
J: Yeah, and all the package installs: I don't know if they could be applied to, like, the FOG image. Maybe nightly, for each of our FOG targets, we just re-run ansible, so we have a freshly updated image.
F: One thing that could be useful is if we could time each task, how much time each task is taking.
A: There's a timer that's part of the results of every job, actually, that has the time each task took. So part of this project could be going through and analyzing that historically, and seeing if there are patterns there.
J: It would be really nice to have the thing where teuthology lets you schedule a job before the build is done.
F: Yeah, I was just thinking: with our dead-job issues and stuff now almost resolved, should we just reduce our timeout for teuthology jobs from 12 hours to something like six or seven, or start with something like that?
A
So
with
the
with
the
coverage
piece,
I
think
the
same
was
talking
to
you
a
little
bit
about
this
phone
before
trying
to
look
at
the
historical
facets
that
had
failed
together
or
not
to
see
if
they
would,
they
were
kind
of
highly
correlated
versus
or
not,
and
whether
they
were
if
they
were
weren't,
covering
like
different,
really
really
different
code
paths
was
worth
removing
them
or
maybe
combining
them
into
a
single
task,
or
something
like
that.
J: It's just too many, yeah. I mean, you could imagine, too: if you understand the correlations, then you could have all the jobs that you start first be uncorrelated, so you have sort of a fast-fail behavior, and then have the run automatically cancel as soon as it hits a failure.
J: The paddles database (not Sentry, the paddles database): does it get pruned, or does that go all the way back?
H
We
could
also
mark
assets
not
essential
the
way
the
subset
code
works.
Is
it
ensures
that
any
that
every
facet
option
shows
up
at
least
once
along
with
a
few
other
constraints?
So
if
we
could
mark
some
of
those
facets
non-essential
that
that
would
probably
cut
us
down
by
a
substantial
factor
for
some
of
the
for
the
thrashing
suite,
for
instance,.
A: We can also make more use of the feature for turning a facet directory from a product into a single item, where we randomly choose one entry within that directory regardless of which subset you're running. It means that directory isn't included in the product matrix; it's as if it's only one thing instead of, say, three things.
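A toy model of that matrix arithmetic: every facet directory multiplies the job count, so collapsing one directory to a single random pick divides the matrix by that directory's fan-out. The facet names below are made up:

```python
from math import prod

def matrix_size(facets, collapsed=frozenset()):
    """Jobs in the product matrix; a collapsed facet directory
    contributes one randomly chosen item instead of its fan-out."""
    return prod(1 if name in collapsed else n
                for name, n in facets.items())

facets = {"clusters": 3, "workloads": 3, "msgr-failures": 3}
# Full product: 27 jobs; collapsing one 3-way facet: 9 jobs.
```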
A: Another aspect of this, which Jason brought up before, is looking at how much machine time we're using for the different suites. That kind of leads into the CI discussion: if we're running tons and tons of runs all the time, using up a lot of machine time, it makes sense to try to do more batching of things together.
A: So, moving on to the build and CI side: there's already work going on around automating the workflow of creating a batch branch and running it through teuthology with GitHub Actions. I think that'll be a big step towards having this whole pipeline be more automated and more efficient.
A
That
might
be
that
we
can't
use
very
large
batches
to
start
with,
but
maybe,
as
we
kind
of
stabilize
those
test,
suites
more
and
reduce
the
and
false
positive
rates
we
can
get
to
being
able
to
run
budget
matches
with
the
higher
probabilities
of.
A: The other side of this is the development speed: the latency to be able to run a test, being able to get packages out of the build system as quickly as possible, or even speeding up a local build.
A
I
think,
there's
a
bunch
of
different
things
that
you
could
do
there
and
I'm
not
too
familiar
with
all
of
myself,
but
thank
you
maybe
you've
been
running
with
like
ninja,
for
as
a
build
tool
for
for
some
time.
A: Sorry, I missed your question. I was wondering, Kefu, since you've used ninja for builds for some time: what's been your experience with it?
N: So in theory ninja is much faster, because all it does is just run the jobs; it's just a batching tool, so it is much faster. Also, according to ninja's website, it's claimed to be much faster than make. I think that's why I switched over to ninja and never turned back.
A: Yep, for sure. There's also the concept of unity builds, where you batch source files and header files together much more, to avoid recompiling everything. Exactly, that's kind of what you're describing. It takes a bit of work to make sure that it's correct, but there is an option in CMake to try to do that automatically.
A
I
think
a
duplicate
another
idea
about
trying
to
do
more,
like
pre-built
sub
modules
like
we
do
for
boost,
or
for
and
good
the
debian
builds
at
least
or
just
using
a
pre-built
target
of
that.
Instead
of
having
to
be
compiled
at
every
single
build
it's
a
lot
of
the
submitters.
We
don't
really
change
that.
Often.
A: Yeah, for sure. I've started this pad as a kind of gathering place for ideas, but maybe it could be split into different projects.
J: But it's like a bunch of scripts that run against the redmine API, and unless you're familiar with them (say I'm doing a backport manually), it's never clear how to interact with that. I wonder if there's an opportunity there: maybe we just need to train other developers to make use of the tools properly, or maybe there's a way to integrate it with, like, GitHub Actions, or something to streamline that workflow.
A: Yeah, I think there's some good documentation about those scripts in the stable releases wiki.
J
Because
it's
always
like
I'm
gonna,
I
know
I
need
to
backboard
something,
and
so
I
like
merge
it
and
then
I
go
and
create
the
back
pull
request.
But
then
it's
just
there's
no
tracker
issue
that
it
matches
to
it's
whatever
it's
just
like.
It
doesn't
yeah.
F: Did you see there's a backport bot that got created? I don't know who added it, but it essentially...
A: ...is triggered manually, or I think it's automated, I'm not sure by what; maybe just running every hour or every day, or whatever. But...
F
Yeah
because
I
mean
I
had
the
same
issues
that
it
was
saying,
but
at
some
point
I
think
nathan
shared
some
documentation
or
wrote
some
documentation
about
this,
which
was
super
clear,
but
the
only
thing
is
they're
two
separate
scripts,
and
you
need
that
initial
setup
that
you
need
to
do
with
redmine
and
you
know
ssh
keys
and
all
that
kind
of
stuff.
But
you
you
still
have
the
issue
of
running
two
separate
scripts,
so
maybe
eliminating
one
script
and
having
that
back
put
what
just
you
know
do
something
automatically
something
is
merged.
M: I think we can also read through the comments. We generally add the redmine tracker link in a comment, and with GitHub Actions we can read the comment messages and use that to automatically update the tracker issue to say this PR is in progress. Even for master PRs that would be a huge workload reduction for whoever's reviewing.
A: Any other ideas or things people want to talk about around automating builds and tests?
J: I guess the last thing is: I noticed that the API tests GitHub action takes forever. I went and peeked at it, and it was compiling all the code, including the stuff that wasn't needed to actually run the tests. I wonder if it's possible to chain things, so that there's one action that actually does the build, then a second one that does make check based on that artifact, another one that does the dashboard API tests based on that artifact, and so on.
A
Yeah,
I
think
ernesto
was
looking
at
that.
I
think
it
was.
I
forgot
how
far
he
got.
I
think
that
might
not
have
been
completely
finished,
but
he
already
got
at
least
started
investigating
how
that
would
be
feasible.
E
Yeah, I think it was always possible to do with the pipelines. It was the question of whether there would be a time savings — there's also the cost, right. You need to, especially—
G
E
You're taking, like, gigabytes of, you know, RPMs or whatever, having to transfer them to another host to, you know, then execute the next step in the Jenkins job. There's that process of uploading it somewhere to an archive and then pulling it down on the next Jenkins builder.
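The trade-off being weighed here — every job rebuilding the tree versus building once and shipping artifacts to downstream jobs — comes down to back-of-the-envelope arithmetic. A rough model (all numbers and the split into wall-clock versus compute minutes are illustrative assumptions, not measurements of the actual CI):

```python
def pipeline_costs(build_min, test_jobs_min, transfer_min_per_job):
    """Rough model of 'every job rebuilds' vs 'build once, fan out artifacts'.

    build_min: minutes for one full build.
    test_jobs_min: per-job test minutes, e.g. [make_check, dashboard_api].
    transfer_min_per_job: minutes to archive + fetch the build artifact.
    Returns ((rebuild_wallclock, rebuild_compute),
             (split_wallclock, split_compute)), assuming test jobs run in
    parallel in both layouts.
    """
    # Today: each job compiles the tree itself; jobs run side by side,
    # so the critical path is one build plus the slowest test.
    rebuild_wall = build_min + max(test_jobs_min)
    rebuild_compute = sum(build_min + t for t in test_jobs_min)
    # Split: one shared build, then each test job pays a transfer tax.
    split_wall = build_min + transfer_min_per_job + max(test_jobs_min)
    split_compute = build_min + sum(t + transfer_min_per_job
                                    for t in test_jobs_min)
    return (rebuild_wall, rebuild_compute), (split_wall, split_compute)
```

With, say, a 60-minute build, 20- and 30-minute test jobs, and a 10-minute artifact transfer, the split layout burns fewer total compute minutes but can actually have a longer critical path — which is exactly the concern about hauling gigabytes between builders.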
J
E
But if there is an issue where it's building too much, then maybe, yeah.
E
J
J
G
Sure. So right now the dashboard has a notification bar that reminds the user to opt in, and once they click that little x on that notification, they're never reminded again to opt in. So I was talking with Ernesto, thinking:
G
What would be the right expiration for that cookie, so that it doesn't bother the user forever? And I think at least a point-release upgrade is a good excuse to nag the user again, but that might be too long. So I'm not sure what the sweet spot is between nagging the user and kindly reminding them.
J
I mean, I think for a major upgrade that's definitely fair game, a good time. I'm not sure about point releases, just because they happen more frequently — although it's probably, like, every two months, I guess, I don't know, yeah.
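A re-nag policy combining the two ideas floated here — reset the snooze on any upgrade, otherwise let it lapse after a quiet period — might look like this sketch. The 60-day default and the flag names are hypothetical, not what the dashboard implements:

```python
def should_renag(dismissed_version, current_version, days_since_dismiss,
                 renag_days=60):
    """Decide whether to show the telemetry opt-in bar again.

    Hypothetical policy: any upgrade (major or point release) resets the
    snooze; otherwise the snooze simply expires after `renag_days`,
    roughly a point-release cadence. Versions are (major, minor, patch).
    """
    if current_version != dismissed_version:  # an upgrade resets the snooze
        return True
    return days_since_dismiss >= renag_days   # or the quiet period ran out
```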
B
G
We had some mixed feelings about that as well. So, Greg, would you have any other ideas? I could expose it through the CLI for users who do not use the dashboard.
B
Maybe just have an extra output when you run status that says, hey, please opt in, or else turn off this warning with one of these commands. Because, yeah, it can't be a health warning, but it could be that on a major upgrade it uses the same flag as the dashboard, or a different one, that's just like: hey, please turn on telemetry by running this command, or opt out by running this.
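The status-line nudge being proposed could be as small as this sketch. The two flags are hypothetical module options, not existing Ceph configuration; the `ceph telemetry on`/`off` commands are the real opt-in/opt-out:

```python
def status_reminder(telemetry_on, reminder_acked):
    """Extra line for `ceph status` output nudging telemetry opt-in.

    telemetry_on: whether the telemetry module is already enabled.
    reminder_acked: hypothetical flag set when the admin silenced the
    nudge. Returns '' when nothing should be shown, so this stays a
    status-output note and never becomes a health warning.
    """
    if telemetry_on or reminder_acked:
        return ""
    return ("telemetry is off; enable it with `ceph telemetry on`, "
            "or silence this note by opting out with `ceph telemetry off`")
```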
L
J
I have the results from the latest survey here — I don't know, can you guys see this? Yep. So about telemetry: a fraction of people have it enabled for all or some of their clusters, and well over half have it on none of them. And the reasons why they don't: a third of them just haven't gotten around to it yet, and a third say that their cluster literally doesn't have access to the internet.
J
So you can't do anything about those ones. These ones — we've just been nagging them and reminding them every time there's a release.
J
G
Yeah, we probably need to send something to the mailing lists or add another blog post, yeah.
A
You could do a tech talk going through some interesting info in the dashboards.
B
B
L
J
G
Yeah, it'll be great if we have examples of how that was actually useful, yeah.
G
Yeah, and about the fear that you mentioned — I guess just education would be good, but it only goes to a certain length. I mean, there will be users that's—
B
Really not sure we can do anything about that as well, except for the actual, like, data-sharing contracts we post. There's a lot of research around that sociological phenomenon, and it's basically: you have to give them a reason that it's good for them, because otherwise they just don't care.
F
Yeah, we also discussed gathering more performance metrics and stuff, so that we can, you know, tune some of the BlueStore settings and things better. You know, advertising things like saving memory, saving CPU, all that kind of thing will also draw people.
A
It's gonna be a bit of a longer term before we can get that out there and then start acting on it, though, I think. All right — we can advertise fixing some of the bugs that we're seeing in the crash reports, yeah.
F
A
A
Okay, let's go into the topic of Windows support then.
G
A
It's Lucian here.
L
J
J
K
Okay, so here's the latest on the Windows port: the RBD support is already in, and we also have a Windows Jenkins job that's testing the build process, making sure that it still works. So we made some pretty good progress with it.
K
I was hoping to get CephFS in Pacific as well. Patrick said that he'll take a look soon, so we should be done with this pretty soon. Since I'm here, do you guys have any suggestions or questions about Windows — the Windows porting effort?
J
I guess Patrick isn't on the call. We talked about it briefly this morning in the leads call, and I think, generally speaking, everyone is pretty excited about it, but Patrick's concern was around making sure that we have sufficient CI integration. So not just, I guess, build tests, but also some testing, I guess, in order to ship stuff. So I don't really know what's on the—
J
I know that, like, part of the plan is to do CI integration, so that we're actually doing builds and we can catch regressions on that end. But what about on the testing side?
K
Yeah, sure. So we did port some of the functional unit tests, mostly around rados, and that helped us find quite a few bugs when porting the libraries to Windows. And we have a plan to expand the test coverage and then automate testing — at first we might run it in our environment, but then we might do it for every patch that gets merged in. That's right, so yeah — that's what we do have at the moment.
K
So here's an example: we have a mingw job that's running with the CI, so here's—
J
J
K
Sure, it's actually being discussed between Cloudbase and some other interested parties, but even if we don't integrate some Windows nodes on the official side, we're probably going to do some testing on our own side.
K
K
Yeah, so here are some test results that we have — these are quite... we gathered them for benchmarking purposes, and—
A
Yeah, I guess I'm wondering if we could, like, run some of these kinds of basic test cases without a full-fledged, like, Windows setup.
K
Yeah, sure, that's what we plan to do. So we have those rados tests that have already been ported, and then we are going to look into RBD — which are the main components that we use on Windows at the moment. And I also have those scripts that aggregate the results and do the test running, because we're building on Linux using mingw — we cannot just use them directly.
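One way to exercise mingw-cross-built test binaries from a Linux builder without a full Windows setup is to drive them under Wine. This is a sketch under assumed conventions — the `bin/ceph_test_*.exe` layout and the use of Wine are illustrative, not a description of the scripts mentioned in the talk:

```python
import subprocess
from pathlib import Path

def run_windows_tests(build_dir, runner="wine"):
    """Run cross-compiled .exe unit tests under Wine and collect results.

    build_dir: root of the mingw build tree (layout is an assumption).
    Returns {test_binary_name: returncode}; 0 means the tests passed.
    """
    results = {}
    for exe in sorted(Path(build_dir).glob("bin/ceph_test_*.exe")):
        proc = subprocess.run([runner, str(exe)],
                              capture_output=True, timeout=600)
        results[exe.name] = proc.returncode
    return results
```

This only helps for tests with no Windows-kernel dependencies (no actual driver mapping), which fits the "basic test cases" being asked about.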
K
Other than that, I saw that people are really eager to use Ceph on Windows. We had plenty of feedback on the mailing list — people either contacting us directly or through the Ceph channels. So that's a good thing.
K
We already have some beta builds, which are updated daily. I can paste the link here in case anyone is interested. So yeah, the only issue with the Windows bits is that it's much easier to install them using an installer, so we'll eventually get that done.
K
J
J
K
Okay, we have to discuss — let's see, I mean, okay — whether we are going to integrate some Windows nodes with the upstream CI, or are we going to do it separately?
K
G
J
K
For CephFS, while developing this we're mainly using some third-party drivers, which might be useful, but it would also be useful to port the teuthology tests and then make sure that we actually run them.
K
I'm not sure — for BSD, are you running any tests already, or is it—
E
H
A
Yeah, thanks for waking up this early — probably not the best time for you, yeah. And Anthony, Patrick couldn't make it this time, but we'll hopefully see you next time.