From YouTube: 2018-08-28 Rook Community Meeting
A: So going down the list here, we have a couple of things, I think, to talk about for a 0.8 patch release. This ticket here that's in review, I believe it's been fixed in master; we just need to backport the fix. But Travis, do you want to go ahead and talk about this race condition for loading the Flex drivers? Yep.
B: So yeah, this is fixed in master. There's basically a bug in Kubernetes 1.11 where, if you load Flex drivers too quickly (like we currently do for the different driver names we've supported in the last couple of releases), Flex doesn't load them and your mounts will fail to initialize.
B: So someone from the community, I forget his name, verified that the master builds with the fix worked for him where he was seeing this, and the proposal is to backport it. When we're ready to get a build out, it would be in there. It's low risk, really; it just puts a sleep in to load the Flex drivers slowly, like five hundred milliseconds between each one.
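The workaround Travis describes (a short sleep between installing each Flex driver copy) can be sketched in shell. This is an illustration only, not Rook's actual agent code; the source binary path, plugin directory, and driver names are all made up, though the flexvolume plugin directory really does use the vendor~driver naming convention.

```shell
# Hypothetical sketch of the workaround discussed above: install each
# flexvolume driver with a short pause so the kubelet has time to
# register one driver before the next appears.
install_flex_drivers() {
  local src="$1" plugin_dir="$2"; shift 2
  local vendor_driver driver_name
  for vendor_driver in "$@"; do          # e.g. "rook.io~rook"
    driver_name="${vendor_driver##*~}"   # directory is <vendor>~<driver>
    mkdir -p "$plugin_dir/$vendor_driver"
    cp "$src" "$plugin_dir/$vendor_driver/$driver_name"
    sleep 0.5                            # ~500ms between drivers, per the fix
  done
}
```

The kubelet discovers each new vendor~driver directory as it appears, so the half-second pause gives it time to register one driver before the next shows up.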
A: With the agent, how is that? I personally don't know of any verification path to check whether a driver has been loaded and recognized by the kubelet. So since we don't have, you know, any way to verify that, then I can't personally, without digging for a much longer time and trying to find maybe some hacky workaround, I don't particularly know of a specific way to do this.
D: The problem, what you mean with right now: there have been people creating issues about it, and it's even resolving this issue here, where it's kind of this whole thing, yeah. This issue is resolved by merging my PR, my original PR, 1959, which is fixing it in master now, but it wasn't backported yet. Yes.
A: All right, yeah. I think that, you know, in general, the potential risk associated with the more complicated feature means it's prudent or wise to potentially avoid it at first and scope it to patch releases. Is this 1921? I don't know anything about the status of this.
A: And when you say simulate, I'm not sure exactly how that would be done. I don't see that this reproduces correctly, or at all, unless, you know, the actual underlying cause (you know, why the RBD kernel module, or whatever it is, takes a long time to mount) actually happens. So I'm not sure, when we say simulation, how we'd hope to achieve that.
A: All right, so we'll follow up on that as well. It sounds like it's not incredibly pervasive. Still, you know, it seems like we're kind of narrowing in, or converging, on what the potential scenario may be, and it definitely seems significantly less widespread after 1.10 or 1.11, when the nsenter mounter was changed with the specific tools that it used, I think. So we've seen less of it; there's no question about that.
A: Okay, so this issue here, 2043, I wanted to draw some attention to it, because this is from Dimitri; you know, the Pacific Research Platform at the San Diego Supercomputer Center guys. They run a fairly large cluster, and they've been an adopter of the Rook project for a while. So Alex, you had looked into this with Dimitri; do you have the latest? It looks like they're blocked on this as well, so I'd like to get some more attention on it. Potentially, you know, I can take a look at it.
A: I am familiar with that code and the OSD orchestration, which tries to kind of watch and synchronize the completion of the OSD orchestrator. And I've run into an issue like this before, about a watch channel in Kubernetes being closed, in the Rook API server when it still existed. So I'll take a look at this to try to see if I can understand this scenario a little bit better.
D: I think, Dimitri, as opposed to my debug output... I think the debug output I added in an annex, in a test image... okay, there we go, it's in my comment, right.
D: If you scroll, yeah, scroll it down there, to test 1 and test 2, where I kind of looked into what is getting outputted in a certain situation. Test 1 is what is getting outputted when that error happens; with the standard output, the channel is closed, yeah.
A: This looks exactly like the watcher issue, certainly, and especially since, Dimitri, you said it takes like 20 minutes. It looks exactly like a watcher channel that's getting closed because of an HTTP keep-alive timeout to the API server, and so the fix that we made in the Rook API server may need to be done here as well. No?
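The watch-channel fix being discussed generally amounts to re-establishing the watch whenever the stream is closed (for example by an HTTP keep-alive timeout). A minimal shell sketch of that retry loop follows; the kubectl invocation in the usage comment and the retry budget are illustrative, not what Rook's Go code actually does.

```shell
# Re-run a watch-style command whenever its stream closes, up to a retry
# budget. In Go the equivalent is re-opening the watch when the result
# channel is closed; this shell form is only an illustration.
watch_with_retries() {
  local retries="$1"; shift
  local attempt=0
  while [ "$attempt" -lt "$retries" ]; do
    if "$@"; then
      return 0               # command ended cleanly
    fi
    attempt=$((attempt + 1))
    echo "watch stream closed; re-establishing ($attempt/$retries)" >&2
    sleep 1
  done
  return 1
}

# Illustrative usage:
# watch_with_retries 5 kubectl get pods --watch
```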
D: Everything in our queue, once it's finished, could get in, like the change to update the Kubernetes minimum version, which should be ready. The Helm thing, I hope I get to that again today or tomorrow, but there are still some which I think haven't even gotten a review yet. Oh, and one thing I want to mention: is Blaine here on the call?
A: Great. And I did, a little bit ago (where is it?), I went through the 0.9 milestone and added 'help wanted' to all the items that did not have owners assigned to them yet, because we have a fair amount of that. You know, we chose to be very aggressive for 0.9, and hopefully we'll find some contributors from the community to take on some of these issues. So everything that doesn't have an owner yet is labeled with Help Wanted.
A: Yeah, I think that a lot of that could be very variable, because we have so much scope in this milestone, and a lack of ownership as well, but I would definitely agree that we would end up scoping it so that we have a release before KubeCon. Okay, great. Blaine, was that you? Yeah, it was awesome. Hey, Blaine. Hi. Okay, so Travis, this has been annoying you significantly recently, our integration test unreliability, and we have not devoted the resources to attack this.
B: I feel like I've spent some time attacking some issues, and when I finally find one issue, then the next build almost, it feels like, brings up: oh, there's another random issue, and then, what is that? So I don't feel like there's a real pattern, other than that, you know, our tests by nature are waiting for Kubernetes to do something and retrying. So I think the pattern of retrying, I mean, it's kind of a Kubernetes pattern.
B: And in some places we aren't retrying where we need to; that may be a small thing. But other places, like, well, we just fixed the Flex driver load issue, where occasionally that would hit us. I don't know, I don't feel like there are significant patterns around it, but when we see one, we just need to make sure we track it and squash it, especially if we see anything in a PR that we're introducing; like, let's not introduce any new random issues.
A: I think, yeah, what I've seen so far is that this kind of feels like a bit of tribal knowledge so far. You know, we see a failure and we say: oh, that's a new one, we know about it. But we haven't necessarily captured it, you know, with the specific symptoms or the specific error messages, so that it could be analyzed later on and kind of attacked later on. So I think that, yeah, we could start doing better; that might be the first step.
A: I think, when we see one of these recurring failures, let's capture it. I don't know if it necessarily needs to be in its own issue, but, you know, at least in the tracking issue that we have for integration test unreliability, let's start capturing that, so we have it in a single place that can be looked at, as a community or whoever, you know, might want to be able to tackle that; resources that could tackle these integration test unreliability issues.
A: And I have found that, you know, if we simply just link to a Jenkins build and say 'this failure', that doesn't really help someone coming along later on. Actually having a bit of analysis there, you know, an error snippet, a potential link to the code where that test is failing; having as much information and as many specifics as possible, instead of just a link to a Jenkins failure, would help someone coming along later on, I believe, to attack these problems.
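One cheap way to capture that analysis is to pull the failing test names plus a few lines of context out of the CI log before pasting them into the tracking issue. A sketch, assuming Go-style '--- FAIL' markers in the log (the marker and context size are assumptions, not something the meeting specifies):

```shell
# Pull failing test names plus a few lines of trailing context out of a
# test log, so a tracking issue can carry the snippet, not just a link.
extract_failures() {
  local log="$1"
  # -A 3: include three lines of context after each failure marker
  grep -A 3 -- '--- FAIL' "$log"
}

# Illustrative usage:
# extract_failures jenkins-build-1234.log
```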
B: Yes, I don't know what to say other than that, you know, our integration tests need attention as we see issues, and, yeah, it just needs attention. Even better if we could do more, like to make them faster, or run more in parallel, or whatever, but just squashing the random issues is the first priority. Yeah.
A: So if we, if I see... you know, I'll be happy to take that on. If we see a recurring failure in a Jenkins build... I think, Travis, you have the most knowledge about which ones you've seen and which ones are recurring, because you've kept a close eye on them, but I'm happy to follow up with some of that legwork to, you know, track them, organize them, and get the information consolidated so that we can efficiently tackle them. So I'll do that.
A: If you help me kind of notice which ones are, you know, failures that we should be tracking like that, is that cool? Sounds great, thanks. All right, I've got two legs and I can use them: legwork, legwork. All right, so that's that. We talked about this issue; I will follow up on that one, and Alex is going to send me the debug logging patch. That's definitely important to me, because showing Dimitri and those guys some love is important. They're special people in my heart.
A: All right. The Minio integration tests that Tony had started a couple of months ago have not been touched in a while, and a few weeks ago Tony mentioned that he would try to look at them that weekend, but didn't. So I pinged Tony again about this, and if they don't get touched, if we can't get Tony to take a look at these, then I might take that over just to get those to the finish line, because I would like some integration coverage on Minio, especially as it's getting adopted and seeing more usage as well. I've seen the container downloads for Minio are a little bit more popular than CockroachDB right now, so it seems like some folks are using it. It'll be good to have integration test coverage. But if anybody talks to Tony, give him a little poke.
B: So this one needs some more review; Alex is taking a look at it, thank you. One thing I just wanted to discuss here is that I'm seeing an upgrade test that has actually helped catch something. I feel like there's an issue here. I did get a green build, but the upgrade test, what it's doing is, before the upgrade...
B: Well, the test has a timeout after 15 seconds to kill the kubectl command. There was a kubectl command to write to the pod, or read from it, and if it fails, that basically just times out after 15 seconds and we kill the command, like it's hung trying to talk to the mons, I think, is what it's doing.
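The 15-second kill described here is essentially a timeout-wrapped kubectl exec. A shell sketch; the pod name and paths in the usage comments are invented, while the exit code 124 really is how GNU timeout signals that the deadline was hit.

```shell
# Run a command but kill it after a deadline, reporting the timeout.
# GNU timeout exits with status 124 when the deadline is reached.
run_with_timeout() {
  local secs="$1"; shift
  timeout "$secs" "$@"
  local rc=$?
  if [ "$rc" -eq 124 ]; then
    echo "timed out after ${secs}s: $*" >&2
  fi
  return "$rc"
}

# Illustrative usage, mirroring the upgrade test's read/write check
# (pod name and mount path are made up):
# run_with_timeout 15 kubectl exec demo-pod -- sh -c 'echo hello > /data/test'
# run_with_timeout 15 kubectl exec demo-pod -- cat /data/test
```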
A: Do we have any instrumentation? Because I know that we do a kubectl exec to perform the read/write in a pod. Is there any instrumentation there? Like, is it the mount that's failing, or is that pod already running and, you know, the mounting, you know, attaching process is not being executed? Or do we look at the kubelet logs for that pod?
A: It sounds like it'd be interesting if, you know, inside the pod, that client that's trying to perform that read or write had some, you know, debug logging at that level, to see, you know, if it's trying to do a mon hunt, like looking for mons, or where exactly it's taking its time. That might be interesting to find out, because I doubt we've seen that yet, right? Yeah.
D: For that, you would need to take a look at the kernel messages with dmesg, because in dmesg it will show what it is currently doing, kind of at which point it is, like: mon 1 has failed now and I need to search for a new one, or: I'm connected to OSDs with a dead IP and now it's gone, or something. It's in the dmesg.
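The kernel-client messages being described (mon failover, lost OSD sessions) are emitted by the Ceph kernel client, so a simple filter over dmesg surfaces them. A sketch; the match patterns are a guess at the common libceph/rbd message prefixes, not an exhaustive list.

```shell
# Filter kernel log lines from the Ceph kernel client (mon/OSD session
# events, rbd errors). Reads stdin so it can be fed by dmesg.
ceph_kernel_events() {
  grep -E 'libceph|ceph:|rbd:'
}

# Illustrative usage (run on the node whose mount is hanging):
# dmesg --ctime | ceph_kernel_events
```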
A: Yeah. Travis, another thought, too, is that the general approach, because it's kind of like a big abstraction layer where, you know, we do a kubectl exec, kind of loses some observability into what's actually being executed. You know, the test just says: kubectl exec is hanging, I have no idea what's going on, and 15 seconds later I'll kill it.
A: Blaine, in general, I think that, you know, pull requests like this, that kind of give an idea of the changes that are being made and then the specific feedback that you're interested in, that's a really good idea. I think that's a great practice and will lead to, you know, better collaboration as well. So this is a great model, I think, that you did here, Blaine.
D: I get both your points, but, on the other hand, I'm seeing it coming that people will be like: here, I was copying and pasting, copy and paste, why isn't it working? And because of that... where should you add the comment in the YAML? It wouldn't be enough, because I think, like, fifty percent go to the Rook docs and use those manifests shown there, and like 50 percent...
A: I'm thinking, after all this discussion here, I'm thinking that maybe Travis and I can let go of the, you know, conciseness and cleanliness of the doc and err toward the side of ensuring correct functionality. So just including the whole thing, like all of the RBAC that's necessary as well. I think we probably just do that and move on, yeah.
B: Something I think would be helpful with this is to somehow guide the user to just look at the CRD part of the doc. You know, just say: hey, here's where the CRD changes are; the other stuff, yeah, just kind of ignore that. And so my last comment there, at the bottom of the screen, was trying to accomplish that: like, hey, what if we added these two comment blocks to say, here's where the CRD is, here's the other kind of stuff.
A: Makes sense. And then a quick question, too, about this: is the ordering essential in the snippet? Like, if the cluster CRD is defined in the YAML before the service account is defined, does it still get reconciled and get to a consistent state with the Kubernetes API server, or does that break?
D: I can't say with 100% certainty, but I think at least if you have multiple files... so I think it's kind of fine to apply it, because, you know, the three dashes are like a new file, depending on how it's interpreted. Because if you have multiple files and you, for example, use a new namespace in those YAMLs and the other manifests, and your namespace YAML, for example, begins with 'n', you would need to name it '00-namespace' or something, so that it's created as the first thing.
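On the ordering point: within one multi-document YAML, kubectl applies the documents top to bottom, and when applying a directory of files the order is effectively lexical by filename, which is why a numeric prefix like 00- forces the namespace to be created first. A sketch that just prints the order; the file names are invented.

```shell
# Show the order in which manifests in a directory would be applied:
# shell glob expansion is lexical, so a 00- prefix sorts first.
manifest_apply_order() {
  local dir="$1" f
  for f in "$dir"/*.yaml; do
    echo "$f"                # for real use: kubectl apply -f "$f"
  done
}
```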
D: There we go. So today, like four hours ago, a user came forward, and I was kind of looking into it. As far as I understand right now, he's looking to implement a Lustre file system operator for Rook. I kind of just said... I also mentioned you guys, but you didn't respond, so yeah, I just wanted him to create a feature request, an issue on the GitHub, and maybe even try this community meeting, but he didn't have time to join it.
A: Yeah, I don't know too much about Lustre, except for my experiences at the Supercomputing conference in Denver last year, but it might be interesting to be able to manage that or deploy that. So yeah, it sounds like you'll follow up with opening an issue, and we'll have some discussion there. Yeah.