From YouTube: 2018-10-09 Rook Community Meeting
B: I think you need to... okay, I stopped. Well, let's hope this will work. My network connection is kind of flaky. I see your terminals here. Alright, thank you for putting me ahead in line to present this, and sorry for the time constraint. It's really unfortunate that one of the two lessons I have is at exactly the same hour as this meeting, but what can you do?
B: This is a Cassandra operator demo. I'm going to present the Cassandra operator creating a Cassandra cluster from scratch, and this is the first demo in a series of demos I'm hoping to be doing every two weeks, each demonstrating a new release. I'm hoping to follow a tick-tock kind of release model: now I present a new feature, next time I present some tests and bug fixes for that feature as well as a new feature, and so on.
B: Okay, so now I'm going to start the demo. Let's first of all talk about what you see here. On the left terminal you see my workstation PC, and on the right side you see the Kubernetes master of the cluster. As you can see, this cluster has four nodes. And on the bottom you have the current action we're taking.
B: Okay, so I just did a reset, so all of these things are deleted, and now I'll demonstrate that my cluster is indeed empty. This set of commands demonstrates that the cluster is empty. I have also bundled them in a little script so you don't have to write them all out. If I run the script, what we see here is that we have three namespaces, default, kube-public and kube-system, all default namespaces, and we have three pods from a DaemonSet, the local volume provisioner.
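The verification described here boils down to a couple of kubectl commands; a sketch of what the bundled script likely does (the exact script in the demo may differ):

```shell
# Show that only the three default namespaces exist
kubectl get namespaces

# Show that nothing is running besides the system components
# and the local-volume provisioner DaemonSet pods
kubectl get pods --all-namespaces
kubectl get daemonsets --all-namespaces
```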
B: The local volume provisioner allows me to use local mount points as persistent volumes in Kubernetes. Basically, it's the new way to use local storage, and it's an upgrade from hostPath. So now that we see that everything is empty, first we want to register our custom resource definition, and also we want to start a new StatefulSet which will run our operator.
B: This is our ServiceAccount, this is our ClusterRoleBinding, some boilerplate RBAC stuff so the operator can create pods and services and things like that. And finally, this is the StatefulSet our operator will run in. Usually operators run as Deployments, but it's better to run them in a StatefulSet, because you're guaranteed that only one instance will be working at a time, and you don't have that guarantee with Deployments.
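The manifest being described has roughly this shape; a sketch where names, namespace, and image tag are illustrative rather than the exact file from the repo:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-cassandra-operator
  namespace: rook-cassandra-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rook-cassandra-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-cassandra-operator   # grants create on pods, services, etc.
subjects:
  - kind: ServiceAccount
    name: rook-cassandra-operator
    namespace: rook-cassandra-system
---
# A single-replica StatefulSet guarantees at most one operator instance at a time
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rook-cassandra-operator
  namespace: rook-cassandra-system
spec:
  replicas: 1
  serviceName: rook-cassandra-operator
  selector:
    matchLabels:
      app: rook-cassandra-operator
  template:
    metadata:
      labels:
        app: rook-cassandra-operator
    spec:
      serviceAccountName: rook-cassandra-operator
      containers:
        - name: operator
          image: rook/cassandra:v0.9   # illustrative tag
```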
B: Correct, yeah, you're totally right on that one. However, we can discuss this maybe at a later time, but since we don't have any storage on the StatefulSet... yeah, okay, I see what you're saying. Maybe it is better, but we can discuss this later in the dev channel. I'm open to doing this another way, or to adding a leader election feature to the operator. So this is the YAML we will be applying, so we apply it.
B: All of those things were created, and we show that the operator StatefulSet was indeed created in the rook-cassandra-system namespace. After that, the next thing to do to create a Cassandra cluster is to create a new instance of the cluster CRD. But before we do that, we're going to start watching everything that happens in our cluster.
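Watching everything that happens can be done with something along these lines (an assumption about the exact command used in the demo):

```shell
# Continuously list all pods across namespaces as the operator works
kubectl get pods --all-namespaces -o wide -w
```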
B: We start watching, and then let's take a look at the YAML we're applying. This is a very basic Cassandra cluster. It will be created in the rook-cassandra namespace. It has the name rook-cassandra, and it's version 3.11.3. It will contain a datacenter named us-east-1 and a rack named us-east-1. The rack will have three members, and for storage we'll use the local persistent volumes we provisioned, with 49 gigs of storage each, plus resource requests.
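A cluster object matching that description would look roughly like this; the apiVersion and field names follow the shape described in the talk, not necessarily the operator's exact schema:

```yaml
apiVersion: cassandra.rook.io/v1alpha1   # assumed group/version
kind: Cluster
metadata:
  name: rook-cassandra
  namespace: rook-cassandra
spec:
  version: 3.11.3
  datacenter:
    name: us-east-1
    racks:
      - name: us-east-1
        members: 3
        storage:
          volumeClaimTemplates:
            - metadata:
                name: data
              spec:
                storageClassName: local-storage   # class served by the local PV provisioner (assumed name)
                resources:
                  requests:
                    storage: 49Gi
```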
B: Sorry, first I need to create the namespace, the rook-cassandra namespace where everything will live.
B: Headless service: this is the service a client will connect to in order to be routed to a specific Cassandra instance. And finally, we will see that for each pod the operator creates a ClusterIP service, which has a virtual IP and serves as this pod's static IP. So we see the first instance is ready in the cluster. The static IP part is necessary, as we also say in the design document, because this way Cassandra can deal with nodes being rescheduled.
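Conceptually, the per-pod service the operator creates is an ordinary ClusterIP service selecting a single StatefulSet pod; the naming convention here is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-cassandra-us-east-1-0   # one service per member pod (assumed naming)
  namespace: rook-cassandra
spec:
  type: ClusterIP   # virtual IP that survives pod rescheduling
  selector:
    statefulset.kubernetes.io/pod-name: rook-cassandra-us-east-1-0
  ports:
    - name: cql
      port: 9042
```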
B: Let's do a kubectl describe on the cluster object that was created. As we can see, the operator posts events on the cluster object. Events are our debugging mechanism, meant to help users understand what they may have done wrong or what the problem is, and then take steps to resolve it. Many core Kubernetes objects also use this approach.
B: Now we see that our cluster is ready, so let's demonstrate first of all that Cassandra indeed uses the virtual IPs. What this command does is go to the first instance of the StatefulSet and execute nodetool status. nodetool is a CLI tool used to manage a Cassandra cluster. So if I run this command...
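The command in question is presumably of this form; the pod and container names are illustrative:

```shell
# Run nodetool inside the first member of the StatefulSet; the status
# output lists each node's address, which should match the ClusterIP services
kubectl -n rook-cassandra exec rook-cassandra-us-east-1-0 -c cassandra -- nodetool status
```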
B: Then, in this new keyspace, we create a new table called map, which behaves like a map: it has a key and a value. Then we insert a new value into this table and select everything, to see that it is indeed working and that we can retrieve values from the database. And yes, it seems that it is indeed working. So at this point, if nothing goes wrong, you have a fully functional Cassandra database. So this was the demo of Cassandra cluster creation. Thank you.
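The keyspace and table exercise is standard CQL, along these lines (keyspace and table names assumed):

```sql
-- Replication factor 3 to match the three-member rack
CREATE KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

-- A table that behaves like a map: one key, one value
CREATE TABLE demo.map (
  key   text PRIMARY KEY,
  value text
);

INSERT INTO demo.map (key, value) VALUES ('hello', 'world');

-- Verify the value can be read back
SELECT * FROM demo.map;
```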
B: Yeah, I'm hoping I will be able to commit that in the next one or two days. The structure I have followed is quite a bit different from the other operators: I use a work queue to handle all the events from the informers and stuff like that. But I'm not sure what the procedure is for the code review, if you can explain it.
B: I've seen that discussion, and actually more or less all the approaches use a work queue: the operator SDK, Kubebuilder, and maybe the controller-runtime library, I'm not too sure about that. But all of them have a very common approach where you just define your resources and your sync function, and all the other stuff is handled for you; you don't care about it. Yeah.
D: The interesting thing about the controller-runtime is that they also manage a queue where all informer events come in, get removed, and get processed. So let's have a discussion offline; it's probably worth using a similar approach across all the operators in Rook at some point. Yes.
G: Good question. I wanted to test over the weekend, but basically what's still missing is at least some more functional tests and unit tests, and possibly, if you want 100% code coverage, integration tests as well.
B: That sounds good. I am hoping to be able to make a PR in the next one or two days. Would you prefer that I make a PR now and include the tests later, or would you rather I do it all in one PR? I was thinking that I would follow the model of a new feature, then tests for the previous one plus one new feature, something like that, yeah.
A: I definitely personally strongly recommend that before anything gets merged into master, it has accompanying tests with it. You can feel free to open a PR if you want to start talking about some of the implementation, and you can add tests to that PR while the tests aren't ready. But I think that we do not want to merge anything into master without accompanying tests, because then that would just be debt that may never be repaid.
F: I don't know of anything either. There's a potential item where we talked about letting the 1.11 client stabilize in master and considering backporting it, but at the same time I haven't heard of a real demand for that, and nobody's reported the issues with the volumes being reformatted recently. I know that's still an issue, but unless there's a real need, I don't know that we need to push for that, because it was a big code churn to update the client, yeah.
A: Once again, the same theme here is that we have a lot of items in here, and a lot of items without owners. We're purposely aggressive about this milestone in terms of including items, with the full knowledge that if they don't get owners, we can remove them from the milestone as needed. Are there any issues in this milestone that we have concerns about, or that we want to raise for discussion right now in this community meeting?
E: Yeah, so we have the initial piece of that back, so basically we have a delivery schedule. The first phase is for Kubernetes 1.12, which just came out. If you already deployed the CSI drivers, we're going to support using them to facilitate some of the functionality.
E: The CSI driver basically handles mon failover: if you lose a mon, or there's a mon failover, the CSI pods can pick up the new mon address from the place the managers redirect to. That PR is under review and I've tested it, so once you fail over a mon, the pods will find the new mon address without any problem. The second phase, once we have the CSI drivers ready for production, is that we're going to deploy them through Rook.
E: The long-term plan is to have Rook itself deploy the CSI drivers. Right now the deployment pretty much matches how the CSI project itself documents the deployment process, the same way you deploy the other components. If you don't want to deploy it that way, then in the future you can let the operator deploy the drivers.
A: I think eventually we're going to need to get to a point where we start culling some of the issues that are in the 0.9 milestone and start targeting the smaller set that we know have owners and that we want to ship at a certain quality level in a certain time frame. I don't think we're necessarily there yet, but that's something to be thinking about: when we would want to start targeting a 0.9 release.
F: Agreed, yeah. And if we think about the timeline, I mean we shipped the last release in July, right? So November is kind of in my mind when we really should target. If we think about a month out being ready, and in the U.S. we have Thanksgiving, which slows people down, then the third week in November, potentially, is in my mind kind of a soft target that I'd like to aim for.
F: Yeah, that's a great question. In my mind, for what's left for stable, there are really two features that come out, and some other bugs I'm sure, but the features are really the versioning decoupling, which is almost done (in the next week we should be done with that, thanks to Blaine's help there), and then the second one is calling ceph-volume to provision the OSDs instead of doing all the partitioning as part of the Rook code base.
A: Those are the two main things you think are needed for 0.9 and moving to stable. What about upgrade support? I know that there was some fortunate progress on that by moving some of the managing controllers of the Ceph components to Deployments, so some of that got a little easier, I think. But I don't remember any larger-scale focus on automation of managing versions and upgrades through the operator itself. What do you think about that in the 0.9 time frame and the stable expectation?
F: I believe that issue of handling upgrades is in the to-do column here somewhere. But my thought around upgrades is that in the 0.9 timeframe it will be fairly straightforward just to automatically update all of the deployments to a new version: the mon, OSD, MDS, RGW and manager.
F: Anyway, those can all be updated automatically to a new version, and then I want to make a pass at the upgrade guide and say, well, maybe there are still some upgrade steps. But once we do the basic deployment updates, I'm not sure there will be a lot left to the upgrade either, as long as the upgrades follow a standard template and there aren't special cases to worry about. Those sorts of things will definitely extend beyond 0.9.
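Automatically updating those daemons amounts to rolling each Deployment to the new image, conceptually something like the following; the deployment names, container name, and image tag are illustrative, and in practice the operator would do this itself per daemon:

```shell
# Roll each Ceph daemon Deployment to a new image version
for d in rook-ceph-mon-a rook-ceph-mgr-a rook-ceph-osd-0; do
  kubectl -n rook-ceph set image "deployment/${d}" \
    daemon=ceph/ceph:v13.2   # 'daemon' is an assumed container name
done

# Watch the rolling update progress
kubectl -n rook-ceph rollout status deployment/rook-ceph-mon-a
```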
G: So it might be a problem, I still have to verify it, but in general it would be good to see us upgrade the Helm version. You know, to at least something more up-to-date, not necessarily the latest, so maybe 2.11 or something, but 2.2.6 is like five or six versions behind, depending on which you take as the latest version. Oh yeah.
A: So I guess to me there are two parts to this. One is using a newer version of Helm in the scenarios where we control the version, and that would be the integration tests or the minikube developer environment. That one I'm all for, upgrading to a later version of Helm, that's perfectly fine. But then there's an entirely different scenario for other people's clusters.
A: I'm also curious, because I don't know myself, I don't have much background or expertise in this, what sort of conditional feature usage Helm supports, along the lines of "only use this feature in my chart if you know you support it", and don't just die and fail if you know the running Helm doesn't support the feature. So there's some research we would need to do, I guess. Yeah, cool, sounds good. Travis, Tom, anybody else, did you have thoughts on Helm versions?
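The kind of conditional feature usage being asked about does exist in Helm templates via the built-in `.Capabilities` object; a sketch of the pattern (the resource chosen is just an example):

```yaml
# Render a resource only if the target cluster's API actually supports it,
# instead of failing outright on older clusters
{{- if .Capabilities.APIVersions.Has "policy/v1beta1" }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: rook-example
spec:
  privileged: true
  # ... rest of the policy ...
{{- end }}
```

This guards against missing API groups; gating on the Helm client/server version itself is a separate question and part of the research mentioned above.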
F: I actually added that, and it's just a small thing to make note of; I don't know that we need much discussion. Mike has actually opened this PR to add 1.12 to the Jenkins tests, and we just need to add it to the Jenkins config so that it can actually run the PR he just opened. It was just failing right now; I'm taking a look at that later.
A: Because this requires changing the Jenkins config, I believe, inside the Jenkins admin UI. Okay, that's one step. And then the other step is that if there are changes to the Jenkinsfile, they get pulled into the build as part of the pull request; the author of the pull request just has to be a contributor. They do not need Jenkins admin access, they just need write access to the repo.
A: So you'll follow up on adding the new agents, and then if we see this pull request not using 1.12, which I guess it must have been, because that's why it was failing, you said, then I guess the Jenkinsfile changes were being honored. Let's see how this goes. Okay. And speaking of Kubernetes versions in minikube, have you seen this? Oh, I'm not trying to drag in the ticket.
E: This one, but I think I didn't... I mean, I feel like it will be fine.
H: We tried multiple file systems a couple of times, and right now I guess multiple file systems are working, but it's not recommended. I think I talked to somebody in Slack and was advised to use a single file system but create multiple first-level folders; but Rook doesn't support providing credentials for that, and there has been a ticket open for that for a couple of months now. So what's the recommended way to create a shared file system for users?
H: It seems like yes. So if, when requesting the persistent volume, it would just fetch the CephFS credentials, that would be great. Okay.
H: Okay, so that's the biggest problem we have now. The second biggest is that nodes hang: the pods that map RBD volumes sometimes hang, and basically the only way to fix it is to reboot the node. I figured out why it's happening: when users set a memory limit and the main process exceeds this memory limit, it gets OOM-killed.
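The triggering configuration is an ordinary container memory limit; when the container's cgroup exceeds it, the kernel OOM killer terminates one of the processes in the container. A minimal illustration (values are arbitrary):

```yaml
# Fragment of a pod spec: exceeding limits.memory triggers the OOM killer,
# which may kill either the main process or a child that holds the RBD mount
resources:
  requests:
    memory: "1Gi"
  limits:
    memory: "2Gi"
```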
G: Because the problem, basically, is that you need something, hopefully the systemd that you're using, to tell systemd that before the kubelet is stopped, the network still needs to be online: the network of the host, and the SDN container needs to be running.
G: Yes, the kubelet already does what it should: it blocks until the RBD devices are unmounted, and as we said, without the network it can't do that. So I'd say you would probably want to stop the kubelet, stop all the containers, and then run unmount on every RBD device before the whole system shuts down, while, as I said, the network is still online.
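The shutdown-ordering idea could be sketched as a systemd unit that unmaps RBD devices on the way down while the network is still up. This is entirely illustrative; the unit name, script path, and exact ordering targets are assumptions to be verified per distribution:

```ini
# /etc/systemd/system/rbd-unmap.service (hypothetical)
[Unit]
Description=Unmount and unmap RBD devices before network shutdown
# Started after network-online and kubelet, so at shutdown its ExecStop
# runs before the network is torn down and before kubelet's own stop
After=network-online.target kubelet.service
Requires=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# Script would stop containers, unmount each /dev/rbd* and run `rbd unmap`
ExecStop=/usr/local/bin/unmap-all-rbd.sh

[Install]
WantedBy=multi-user.target
```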
A: Just one more thing to look into here: if the pod is getting killed because it's overrunning the memory limit that was put on it, I am kind of surprised that the attach/detach flow doesn't get kicked in to perform the detach and unmount successfully and clean things up. I'm kind of surprised that doesn't get kicked in, so there may be an issue on that path somewhere that might need a grander investigation.
H: As I say, the problem is that pods run several processes inside, and if the main process is killed... it depends: the RBD can be mounted by the main process or by some child process, and the OOM killer kills the process consuming the most memory, which can be the first process or a child process. And basically, if a child process mounted the RBD but the parent process is killed, then the child becomes a zombie.
A: Yeah, Dimitri, I would go ahead, if you can, and open up an issue in the Rook repo, so we can have some discussion about this, and other people from the community who aren't on this meeting can chime in with their expertise, and we can track some workarounds or solutions for you. One thing that I've really appreciated is the usage that you guys at UCSD and the Pacific Research Platform have been doing with Rook across multiple versions.
A: Now, the workloads you all are running are bigger, or perhaps scaled-up, heavier workloads, and so you're running into more of these issues, and getting more vetting is right along the lines of the push in 0.9 to reach stable with Ceph. So nailing these issues, finding solutions for them, and getting that quality and reliability up is very important, and we get that feedback from you all, which is really important.
C: So we have nodes at pretty much every university in California, and soon to be all over the country, available for doing tests. We're also right now pushing up against some HA issues; we're deploying our first six-node HA cluster for a different project inside the university. It's called HPWREN; it's the network of cameras that record wildfires from the mountaintops all across California and Nevada. Anyhow.
A: And I don't know if anybody on the Red Hat side has some insight or an update about whether there is a reliable and confident way to deal with this type of scenario, where you've lost contact with an RBD client. Do you know if it's safe to kick that one out, or try to kick it out and start a new RBD client to the same resource in a different location in the cluster? Is that a safe operation?
A: One way to do that also, Alex, is that you can drag to reorder in here, so you can order tickets higher and lower on the project board. That's one way to do it. Another would be an appropriate label, something like production-blocking, or just needed-for-stable or something like that, to give an idea of priority from that perspective.
C: Their minds are just blown by what they're able to do because of what you guys have done. That's...