From YouTube: 2017-11-20 Rook Community Meeting
Description
No description was provided for this meeting.
A: Essentially, the proposal is open. There's been a bunch of public comments. I believe the schedule is that there's going to be a discussion at the December 5th face-to-face meeting of the TOC. The next meeting is December 5th, 8:00 a.m. Pacific, which will then be, what time in Austin, 10:00 a.m. I guess, and it's going to be face-to-face in Austin. Is that open to the public? I would assume it is; it's a TOC meeting, so I'd assume that it will be. Yes, Tuesday.
A: It takes a supermajority for the project to be accepted, but that's the parliamentary part, really; there's no change beyond that. I don't think there's anything... I mean, I'm gonna help answer questions. There are going to be possibly one or two TOC contributors doing technical due diligence between now and then, so I guess just be prepped to answer questions if they come up. Otherwise, there's really no action.
B: Yeah, and on that, Bassam, it sounds like there is a new regression upstream somewhere. I'm not sure if it was in CoreOS specifically or if it's in Kubernetes 1.8.x, but the agent, or sorry, the operator, is no longer being told where to, you know, store flexvolume plugins, even if it has been set on the kubelet. That information is not reaching the operator anymore, so there's some sort of regression somewhere, and that makes the situation even worse. Yeah.
A: You know, it's not a supported API yet, so I wonder if they've changed things around with that node config API. But both of those are alarming to me. You know, we should figure out how to work around them, or what we need to do, depending on when 0.7 is. I'd argue that if we have a fix for these, we should put it in 0.6 and release a 0.6.1, but it does seem like, you know, we're kind of broken.
D: He supposedly also had it on CoreOS, and most people in the issue also seem to have it on CoreOS, after updating from 1.7.9 to 1.8. I don't know if it was 1.8.1 or 1.8.2 that I upgraded to. I also got an issue where, yeah, it just used the user... it looks like setting it fixed the issue.
A: Who wants this, I think?
B: Yeah, I think we don't have a clear answer on that, Bassam, but I think this is also, you know, a case that I don't think Dimitri was trying before, you know, with a multi-terabyte-sized volume. So I think it's kind of a new case that wasn't necessarily tested with the old RBD plugin. That's maybe potentially why we didn't see it.
B: Yeah, we could definitely compare against that behavior and make sure we're, you know, not out of whack with what our approach is. Yeah, the general concept of, you know, basically creating a lock before we're completed, and making sure we clean up that lock if, you know, that node is abandoned, is gonna be important.
B: And something else that's important to remember, though, is that the normal failure path is that, you know, Kubernetes will retry, and in the vast majority of failures I've seen, you know, the retry fixes it. We don't leak anything there. What happens here is that when there's user intervention to kill the pod and get rid of it before, you know, the retry gets to reach eventual consistency, that's when we hit this. So obviously it's something that needs to be fixed, but it's not the default failure path here.
A: You know, all these things are remaining and the garbage collector is not cleaning them up, because it's all still referenced, right? So yeah, I think that we should look at it. I know there are tools to do that; I'm sure you can dump the heap and look at the frequency of objects in it and what kind they are.
A: Okay, but what I'm thinking is that we don't do anything special for KubeCon. We keep on the path of releasing 0.7. I mean, we could do a 0.6.1 before KubeCon if we have these fixes, but otherwise we just keep marching towards 0.7, which will happen after KubeCon. I don't know if there's anything, you know, that would change or is needed for KubeCon.
D: Don't we need to update the docs? I don't know its name anymore; it's still running Kubernetes 1.5. We're, I think, on almost the latest version of Rook, and I know it runs into problems sometimes, and I don't know, at least I'm not sure, whether it's unrelated, whether it's related to Kubernetes 1.5, or whether it's really an issue with Rook. Yeah.
D: Or maybe not a big thing, depending on maybe what comes out in a discussion, but there is some issue from my side about the Bluestore directory block size ratio. Yeah, well, I kind of would like to have an answer right now, because I would like to implement a change, because right now I'm running a custom build of Rook, which I don't like.
B: You know, the file system size check up front, that was kind of... I don't think it's really reaching the goal it was intended to, because over time, you know, there are no checks later on to prevent it from happening; it's just an upfront thing. So I think that your proposal, which is the most simple thing to do, and, you know, ends up with the equivalent amount of safety, is, you know, just having it...