From YouTube: 2018-05-22 Rook Community Meeting
B
Yeah, so first of all, I just copied the template, and the one issue I added there real quick was: someone from the community opened a pull request, which I didn't get a chance to look closely at, but it looks like, well, the Luminous release, it's been a few weeks at least, and it's not included in our latest 0.7 release, so they were requesting that we spin a new build to pick up 12.2.5.
A
Click the unmute button, actually. Well, that's not doing anything for me. So, okay, you know, Alex, we could not hear you. I can see your chat, but I cannot hear you, and it looks like you're unmuted now in software, but I don't know about your hardware. You know, okay, cool, thanks. You will follow up on that particular issue from the community member then, right?
A
Thank you very much for your help on that, Travis; that really, really helped us speed up the process, and Tony as well, to get that ready, much closer to getting to master. The feedback on the PR is completed locally in my local copy, and I have not pushed that to the PR yet, and beyond that I don't know of any other steps, besides a final reviewer, you know, clicking the approval button, and my final green build.
A
Personally I have, because of the frequency of commits I've seen, maybe, you know, five or six commits a day, I had held off on reviewing it, because I made the assumption that it was under a lot of churn, and still is. Do you think it's at a place? Are those frequent commits that are happening, or are those small tweaks? Now, is it ready for me to take a pass on it, reviewing?
A
Good, yeah, I would definitely like us to pay very careful attention to upgrade, considering how much effort we have put into the upgrade, supporting upgrading, some automatic migration features for going from just supporting Ceph to supporting multiple storage types. Yes, I would like to keep our upgrade story intact.
C
In relation to local volumes upstream and a convergence path for those pieces, I think at the last community meeting we had talked about maybe starting a discussion there. I know there was some effort in a couple of pull requests to talk about improving local volumes, and, sorry, I'm personally curious about how this work dovetails into that.
B
Yeah, I'm not sure, since our last meeting, that I've seen follow-up on that yet. That's a good one to still follow up on. You know, the thought was that our discovery pod wouldn't be needed anymore after local volumes are available, that we know the operator would be able to consume the local volumes in place of provisioning the raw devices, but yeah, that needs follow-up.
A
From the Rook community, do we have anybody who is kind of on point, or a particular specific liaison for that feature to the storage SIG? Because I think, in terms of knowledge and ownership of, you know, integrating local volumes with Rook, it's kind of been passed along a number of people now, and is there anyone who kind of has, you know, a bit more of an on-point responsibility for that?
C
There was also the discussion in one of the pull requests about the OSDs not finding each other if they're in containers on the same host, and then I think Sebastian said that he tested it and that he thought it was all good, but we couldn't really identify what changed in Ceph to fix that. And, you know, I'll feel a lot better if we actually can point at what changed to fix that, because it was a pretty significant issue in the past.
A
The only thing I remember about that conversation, or I might be confused with another conversation, was that when you have more OSD processes running, they exhaust the PID space. Is that it, or is that an entirely different issue than what you're talking about? Because that's what I recall there being in the context of: oh, something fixed it, but we're not sure what it was.
B
What I wanted was to make sure I understood the security model, and, okay, I worked on it for a bit. And, you know, well, there's one guy I've heard of who wanted to try it out on OpenShift, so I might just give him a private build in the meantime, and then I'm not sure of the order yet. It could be good just to go ahead and merge the OpenShift one; I think waiting for the security model would just be a nice thing to do.
A
That's correct, yeah. I wonder if we could get, like, maybe half an hour or less of Ilya's time to see if he can poke around a bit, or if he knows of any configuration that might help this. So I'll talk to Ilya; since I'm in Seattle with him, I can give him a little physical poke and see if he can take a look at this and help us out.
A
It's 1721; that's an issue that has existed since the beginning of our agents and Flex volume implementation. I can't believe that we didn't discover this until just recently, and it's certainly a nice-to-have, but this issue has been there for a long time and has not necessarily had any, or many, you know, production instances. Travis, you said that someone was interested in working on this from the community, yeah?
E
So, there's a change to the code generation script, the script for 1664. It's kind of problematic because I don't have a Mac, so I can't really verify a fix, well, unless I have a Mac, or, well, maybe a BSD virtual machine, to check if it's working. So I'm probably going to sit with Gareth or someone with a Mac and look at how I can fix it quickly, because it's a small thing to fix.
E
We spawn three by default; we still only spawn three, but if the user gives the mon count, it will use that, no matter how many nodes exist. So if the user gives, let's say, five as a mon count, but has only, like, one node, kind of as how our CI works, it will always spawn their five mons then.
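For context, the mon count being described here is a setting on the Rook cluster object; a minimal sketch of what such a spec might look like, assuming the era-appropriate `rook.io/v1alpha1` Cluster CRD and treating the exact field name (`monCount`) as an assumption:

```yaml
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  # If omitted, three mons are spawned by default; when set, this
  # count is honored regardless of how many nodes exist, e.g. five
  # mons on a single-node CI cluster, as described above.
  monCount: 5
```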
A
Who was that that was just talking, with that useful information? This is Blaine? Awesome, thank you, Blaine, I appreciate that. And actually, Blaine, I wanted to talk about 1404 also. I know you and Bassam had been in a conversation recently about using the upstream images from the ceph-container project. What is the latest status on that, if you guys could summarize?
F
Yeah, so there's like 60 megabytes still unaccounted for in that image, and I don't know exactly what that is. I guess I have a theory that, because the yum package manager doesn't have the concept of, like, recommends dependencies, it only has, like, hard yes-or-no dependencies, it may be pulling in some of what would otherwise be recommends.
D
Although I objected to the entire concept of being over-attentive about the size of the image, I did do some experimentation with that. I suppose, if we're going to do so, we've got to go through it and see if I can classify some of the stuff, and see what the major differences are. Okay.
F
Yeah, the additional packages. So this CentOS image also has iSCSI packages in it, which the Ubuntu image doesn't, so that accounts for at least, according to what yum says the install does, I think something like 46 megabytes. And then the CentOS base image also is about 50 megabytes larger, I think, something like that. Maybe, oh, I think it was closer to, like, 90-something megabytes.
F
So, like, that accounts for some of the size, and then there's just 60 megabytes left over that I wasn't sure where it was coming from.
A
Cool, oh yeah. Thank you very much, Blaine, for the update, and also for your consideration about merge conflicts and the like; I appreciate it very much. All right, so I think that covers basically the big things for 0.8. There are some items that we discussed are very likely not going to be in the milestone, some nice-to-haves, and the core issues that we are most concerned about have, I think, all been discussed.
A
Do they do their own negotiation to figure out, or is it explicit, what the operator tells them to do? Okay, great, thanks for clearing that up, Alexei, I appreciate that. The last issue that I had on the agenda here was about the multiple storage providers, and I said that we'd talk about that later, but we kind of already addressed that: we know that all the planned-for items are completed, and the final feedback from Travis is being integrated, and we will have that locally. I finished that on the bus this morning.
A
So we need to test that, and we'll do another test pass to make sure that everything seems sane, and that's all the other feedback that I know of on that pull request. So if there's anybody else who has feedback on the pull request for this issue, for the refactoring for multiple storage providers, the pull request is 1678, then it would be great to go ahead and get that feedback in, because otherwise we're going to need to go to master very soon.
B
Yeah, if someone wants a summary of the changes, one thing you might just look at briefly is the pending release notes in that PR; I was just going to post a link to that, just to see what changed from a user perspective, and whether there are any concerns at that level even, but I think they're pretty basic. Not that I would hope it changes at this point, but something to look at as a first pass.
A
I hope it will make things smooth for people as well, because there was a lot of thought that was put into, you know, the migration process and not, you know, breaking people's existing clusters. So I hope that this will be smooth, though I am NOT making a guaranteed statement that it will be, because those are famous last words, right? So.
A
Yes, that is correct, Bassam. Those implementations are in a holding pattern in their own forks, their own branches, just waiting for this refactor to get into master, and then Tony and I will complete those implementations with the latest from master, and test them and make sure that they're good to go, for the better part.