From YouTube: 2018-06-19 Rook Community Meeting
Description: No description was provided for this meeting.
A: I think there's not much else left at all, Tony, for the Minio operator. If I recall, there was one thing I saw last time, right before the weekend, about the documentation hierarchy, like where the docs go and the navigation. I think that was the only remaining item I was aware of. Is there anything else that you know of, or did you already take care of that?
No.
A: Okay, cool. I'm not too concerned about this issue in terms of its scope, because it's a really small change. So, cool. Thank you, Alex!
So then, 1656, one OSD per device. Sorry, one OSD pod per device. Wyman and Travis, I think you're both online here. Do you want to talk about that, especially what's left and the risk for the 0.8 milestone? Yeah.
B: Okay, so yeah, last week, Wyman was out, so I took a look at it for a couple of days. In its current state I feel like it's in a good place; I'm not sure there are any blocking issues remaining. The upgrade path is working from my testing, and that was the last risk area, I felt. The one main issue, which is really independent of this but related, is 1776, which was in the in-progress column.
B: That one means that during upgrade or OSD restart in some clusters (it's not clear yet), sometimes the disk ID is not recognized, and so we blow away the old OSD and put a new OSD on it. So that one is a blocker for 0.8; we need to understand it. But for this one-OSD-per-pod work, I'm not sure we need to have that block the PR, and then I'm even wondering if we could... it might make sense.
B: Right, and another thing with that: we didn't see this before we moved over to udevadm in master a few weeks ago, so I tried reverting the UUID discovery logic to use sgdisk again instead of udevadm, and in my environment, where I reproduced it, sgdisk also had problems in that cluster. So I need to investigate more, and Wyman was going to look at it, but yeah, it's not clear.
A: There was some discussion about the integration tests and using devices, and restarting an OSD during the integration tests. There was some effort that you had done on that, Travis, and it was dependent upon the one-OSD-pod-per-device work also being merged. What's the status of that? Should we be tracking that here as well?
D: Also, for the integration tests, the first thing is to have the devices right now, but I don't think we have them, unless something changed in the last week that I didn't know about. So once we have devices in the VMs and we're able to test with the devices, the OSDs can do a real test, and then we can delete the pod we started.
A: To the best of my knowledge, in our CI environments there are entries in the matrix that have devices attached to the VMs, so we have devices available during the tests. But somewhere, somehow, at some point the integration tests stopped using those devices: everywhere we have the option, we just say don't use them. So at some time in the history here we stopped using them, and I do not know why that happened.
E: I would like to kind of look into it, the device part, together with the Vagrant multi-node environment. If we can port that to Jenkins, you know, then we'd have multi-node for testing as well, and then also have not only one device to test with, but even multiple devices if possible.
A: Yeah, if what we believe to be true holds, that the CI environment in GCE has devices, then it should be a simple change of passing true for the use-devices argument to the function that sets up Ceph, as opposed to just passing false every time. So that should be an incredibly simple change to get that going again.
B: If that's as simple as a flag, then we should just enable it, or attempt to enable it right now, because a big part of this is that feature. So I'll see if I can find that flag and just enable it. If not today, then Wyman, maybe you could take that one on for the next couple of days. Yeah.
A: Yeah, my big concern here is that, because we have such a significant amount of churn with OSDs and devices and such, being able to do the vetting that we can in our continuous integration environment around that area before we release would make me feel a lot more comfortable about the quality of the release going out. Yeah.
B: The restart PR, that one maybe we could just include in the OSD-per-pod PR, to run them at the same time before we merge; it's just a small PR. And making sure we have the integration tests with the flag enabled to use the devices, that one we need. Otherwise I'm not sure how many new scenarios are needed in the integration tests, because it's really the same scenarios, but yeah, I agree: it needs lots of testing and multiple eyes on it. Yeah.
A: Travis, I assume you all can still see my screen here, where I'm now editing the 1698 PR description. Do you already have a checklist item in here for integration tests and using devices and stuff like that? Because I'm just going to add one right now, so we don't forget about it.
A: I feel better about that. So it sounds like we have some agreement on what we're going to do for integration testing for this PR, and then also some manual testing commitments that you guys have already made. Travis, you're saying it sort of looks good on your end; I'm going to take a pass at it too, and I know Wyman's testing it for sure as well.
A: So that covers everything that's in the in-progress and in-review columns. Let's take a quick look at the things that are in the to-do column. I think there are some items here that are nice to have, but the ones I think are absolutely necessary are, you know, the upgrade guide verification, the full end-to-end upgrade from 0.7 to 0.8; that's a blocker for the release for sure. And the bottom one in that column, about the failed monitor.
A: So what do you think is the best way to track that, like an opportunistic pickup in a minor release type of thing? If the Luminous release is after 0.8, how do we track it then? Yeah, so we just leave it on this board here. Like, I think, yeah, leave it in the project, because it's part of our community agenda: we always have lingering issues from the previous milestone that we get together and discuss, so yeah.
A: Okay, so leave it in the project and clear the milestone. All right, this issue, this freaking issue, yeah, this is very, very annoying. The last time we discussed it with Ilya, the consensus was that our Jenkins instance is incredibly old and outdated and has legacy configuration on it, including the plugins. Ilya recommended setting up a new Jenkins environment without all that legacy and outdated configuration, but that would be a fair investment, I think, and Ilya is out of town right now.
A
Also
so
I
don't
know
if
we
have
the
right
resources
to
do
that
immediately.
Anyways,
so
I
don't
know
of
any
really
better
ideas
to
address
this
than
the
you
know,
kind
of
the
babysitting,
but
its
necessary
for
when
APR
is
ready
to
merge
and
it
gets
the
Greenbuild
to
go
ahead
and
merge
it.
Not
let
it
sit
around.
Is
there
any
other,
better
ways
to
approach
this
right
now
from
the
community.
E: Not sure; I've only had limited access, so it's hard to tell, because I think some of the configuration really is in a file somewhere on the Jenkins host. But if not, we should definitely ask Ilya. I'd suggest that we redeploy it, or set it up with the latest version, at least of the plugins. There are also, I think, a few security points that should be a concern, depending on how we see it.
E: So, just to give an idea, I think we could potentially add a help-wanted label here, because, at least from my perspective, it's basically just checking whether the pool that should be created is erasure coded, and if it is, we would need to add one more flag with a value to the rbd create command, at least as far as I know.
E: If erasure coding is used, then to fix it in general, what is missing here is just a flag that points to the data pool. But yes, it would require changes to the pool CRD. If it's not using erasure code, that's fine; otherwise you need two pools, one erasure-coded data pool and one for metadata, which is replicated.
E: You would still need to create two pools, and documentation too. If you want to use erasure coding with an RBD pool, you need to create two pools and then some logic combining both pools, so that the data pool is correctly set. So yeah, I think it's a help-wanted item, because there needs to be a correlation, at least in some way, to tell it: create this as your data pool, and this is the normal metadata pool.
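For context, the split E is describing matches what Ceph itself requires for RBD on erasure-coded pools. A minimal sketch of the commands involved (pool and image names here are made up for illustration, and these need a live Ceph cluster, so they are shown only to illustrate the flag being discussed):

```shell
# A replicated pool holds the image metadata; an erasure-coded pool holds the data.
ceph osd pool create rbd-meta 64 64 replicated
ceph osd pool create rbd-data 64 64 erasure
ceph osd pool set rbd-data allow_ec_overwrites true   # needed for RBD on EC pools

# The missing flag under discussion: --data-pool sends the image's data objects
# to the erasure-coded pool, while the image itself lives in the replicated pool.
rbd create --size 1G --data-pool rbd-data rbd-meta/myimage
```

This is why a single erasure-coded pool in the CRD cannot work for block storage on its own: the operator would need to know both pools and wire the data pool through to rbd create.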
E: From my perspective, the thing we should change now is the documentation, so that at least the manifests we currently provide do not use erasure coding by default, because I think some of the manifests we provide are using erasure coding by default. Currently it's broken because of that, and a good amount of users out there are running into it.
E: If we have the time, then it would be good, because as long as we update the docs and samples before the 0.8 release, so the docs are updated, people won't run into it too often. As I said, right now I think the default is erasure coding for file system and/or block storage; I'm not one hundred percent sure there, but we should take a look, to at least amend it in some way, because it's currently a bit broken for block.
A: Okay, so it's not in the milestone right now. All right, excellent, okay, sounds good. Let's move along, then. That's a lot of discussion there on the 0.8 milestone, but yeah, we need to continue driving toward getting all the features completed for 0.8, and then also keeping the quality at a sufficient level in order to release, right?
A: Okay, so, you know, I think that as we're converging on this release, getting all hands on where we can all help out, getting the quality and everything completed, is going to be pretty important. It sounds like Alex is ready for that, and I'm ready for that, so we can, as a team and a community, get everything ready to have a smooth release. So, we already talked about this one, I believe, correct?
A: So I think the big unknown there was that, from my recollection, what was taking a long time was the map operation. I could be remembering that incorrectly, and it could be something further downstream, but I'm fairly confident it was rbd map that was taking a long time, and I recall that it was not well known that that could sometimes take a long time, so it seemed like it was somewhat unique to our experience. Does anyone else have an idea about that?
C: I just have a little bit of speculation. I mean, one pathological case is: if somebody has a very unhealthy or unavailable cluster of OSDs, then the RBD client may simply block until it can access the underlying RADOS cluster. So if they have a seriously unhealthy cluster, that might manifest itself as "my rbd map operation took a long time." But if it's only happening with particularly large RBD images, then that wouldn't apply, I see.
C
Me,
the
last
thing
I
noticed
from
reading
that
the
workers
shoot
well,
they
see
was
that
so
he's
pointing
out
the
deleting
volumes
too
slow
too,
and
that
definitely
is
true,
because
waiting
the
objects
in
an
RPD
image
is
a
orderin
operation.
If
they've
been
written
to
write.
D
Beyond
just
the
serf
issues,
the
if
you
have
an
operation
which
can
be
very
time
consuming,
it
might
be
worth
looking
at.
Can
we
separate
the
phases
of
return?
You
know
how
long
it
takes
to
complete
so
like
the
volumes
available,
the
filesystem
isn't
created
upon
it.
Yet
that
essentially
lumping
stuff
is
inviting
your
storage
vendor
to
take
it
a
long
time
to
do
some
operations.
A
Good
point
Gately
all
right,
so
we
will
follow
up
on.
You
know
a
you
know:
a
reproducible
instance
to
figure
out
more
there
Alex
your
multi
node
dev
environment
for
helico,
you
crossed
out
Mac
and
for
every
flip.
So
that's!
What's
the
status
on
that
now,
Travis
you've
been
using.
That
too,
haven't
you
yeah.
E: I'll keep it separated so I can develop it further, but I'm going to check it in. So basically, what I've thought about: we have this minikube.sh script, which sets up the minikube environment for us, like loading images and stuff into it, and what I thought about is simply creating kind of the same script, but one that checks out the Vagrant multi-node repository, starts up the cluster, and runs the make commands to set up the environment for us. So basically like a minikube.sh for Vagrant, okay.
A: That's fine, yeah. That was a really, really impressive effort, Alex, to have that multi-node, multi-disk, configurable environment to run Kubernetes in on developers' laptops. That was really big, to be able to just whip that out, so I know Travis and I are grateful.
A: And that was a great effort from Sebastien as well, definitely, definitely. But for us Mac users, this Vagrant solution is going to be very helpful. All right, that's good, that's great. Then I had an item: as you mentioned earlier in this meeting, the CockroachDB operator has been merged into master, and we've only had one master build after that which has published the new container images for CockroachDB.
A: They have a tag of dirty, implying that on the build machine, during the build, there is a change in the tree. That's happening at build time, and that's what's causing the tag to be dirty, meaning that the working copy has changes in it. I cannot reproduce this locally when I try to build, and I cannot connect to the Jenkins instance. Does anyone have ideas about how to figure out what the cause of this is?
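The "-dirty" suffix itself is standard `git describe --dirty` behavior: it appears whenever a tracked file has uncommitted changes at the moment the version string is generated. A minimal sketch in a throwaway repository (this is plain git, not Rook's actual build script):

```shell
# Demonstrate how a "-dirty" suffix ends up in a version string.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"
git tag -a v0.1.0 -m "v0.1.0"       # describe needs an annotated tag by default

clean=$(git describe --dirty)       # tree is clean: plain tag name
echo "generated at build time" > leftover.txt
git add leftover.txt                # an uncommitted, tracked change dirties the tree
dirty=$(git describe --dirty)       # now the tag carries a "-dirty" suffix
echo "$clean -> $dirty"
```

Since untracked files do not trigger the suffix, the culprit on the build machine is a tracked file being modified (or deleted) during the build; running `git status` right after a build there would show exactly which one.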
E: There are mostly small changes left. At least one of them was also hitting a CI failure, to put it like that, so I'll ping him again. This one's already done, as written, yeah. For this one, I'm going to wait a few more days and then see if he's still going to pick it up and fix it, as we discussed; if not, well, I'll pick it up.
A: Okay, all right, cool. So that was everything we had on the agenda here. The big thing is still driving for the 0.8 milestone, and we discussed that pretty heavily, so no need to continue harping on that one. Were there any other agenda items that did not make it to the doc that people wanted to bring up now?