From YouTube: 2018-04-10 Rook Community Meeting
A: Christian and his team are the main folks I've seen affected by this, so I guess we can continue to focus on making sure it's solved for him and his team in his environments. And then if we happen to get a deeper understanding of the root cause, you know, why it affects some people and not others, that would be good. But since it's going to be fixed, or apparently is fixed, in 1.10, that becomes a lower priority for me, in my opinion.
A: So, Alexander, you and I talked about this, I think yesterday, on Slack. There was a user discussing an issue they were running into that originally looked like it was due to this bug here, about the Helm job that waits for CRDs to be created, but after further investigation it turned out to be a different issue. So I don't believe we need to go ahead and release for that particular issue here on the 0.7 board.
A: I totally agree with that. I added it to the 0.8 project as a nice-to-have; it's not in the milestone, just the project. So it would be nice if somebody could pick that up and fix it, but I'm leaving the country tomorrow, so I don't have the bandwidth to do it, that's for sure. But we're tracking it here.
A
I
added
to
the
project,
for
you
know
a
nice
to
have
and
honestly
committing
tour
for
the
milestone.
But
if
we
know
you're
forget
to
do
it,
then
that
would
be
create.
I'm
gonna,
add
a
comments.
You
also
cuz
every
when
I
was
looking
to
the
west
decode
earlier
today
that
there's
another
place
that
that
could
be
have
an
effect.
B: And I feel like in our last meeting we talked about shooting for having 0.8 out for KubeCon, which really only gives us two and a half weeks before getting on a plane and heading out there. And if we're trying to get some of these bigger items in before then, we're at risk on that timeframe. So yeah, your question as far as making traction, I agree, especially with all the things in that to-do column there. We really need to decide.
A: Yeah, let's do that. The biggest thing that I'm aware of, and that I am entirely focusing on, is supporting multiple storage providers. We would like to possibly get an initial implementation for one of the smaller or easier ones as well, something like Minio, and it would be nice to show that off for KubeCon. So I'm hoping that something very concrete is ready and available by Copenhagen.
A: So I think, you know, some of these things here, the roadmap needs to be updated a little bit, but having something to show there is entirely reasonable and still in the plans for KubeCon. Having an actual release that's gone through the full release pipeline and testing and vetting and all that is less likely, much more at risk. Yeah.
B: Yeah, I agree. We need to get some of these things in, like one OSD per pod, which I think is a critical part of this. And if we're saying that needs to come after the multiple storage backends, and it also sounds like people are out on vacation, there's no way that's done before KubeCon, just that alone. Yeah.
A
So
it's
it
sounds
like
that's
the
you
know.
I
would
like
to
have
things
completed
that
we
can
demonstrate,
but
not
necessarily
an
official
release.
So,
let's,
if
we
have
strong
objections
from
anyone,
then
we
can
raise
that
now,
but
it's
also.
That
would
be
something
to
talk
to
Bassam
about
as
well
to
get
his
opinion.
C: Yeah, that still seems like an important point for a good amount of people. Or at least, the way I see it, it depends on the effort; it can only do as much as it can do, to put it like that. At least in my case, with my cluster, I never had issues with the mons, because I know I need to maintain a quorum, and well, if one fails over, yeah.
A
Yes,
follow
up
the
roadmap,
so
let's
go
ahead
and
move
on
then
so
Dimitri
we
have
Demetrius
prepared
a
demonstration
on
how
next
sensor
edge
runs
inside
kubernetes,
which
is
gonna,
be
very
informative
for
us
so
to
meet.
You
I'll
go
ahead
and
pass
it
off
to
you
if
you're
ready,
yep
awesome,
let's
see
if
I
can
find
the
button
for
that.
E: So here we have a console. I prepared some YAML files; I'll go through them real quick, I don't think it's going to be more than one screen, really. And we're going to talk a little bit about how we discover disks and the network. So right now this cluster is really just three nodes, essentially nothing special, running Kubernetes version 1.10.
E: Each node has disks, and they're already pre-mounted into the target's local data filesystem volume. We actually support both filesystems and block devices, raw devices, as well, very similar to what Rook with Ceph does, with a boost by essentially making use of SSDs to optimize placement of the metadata.
E: There are two networks created here. One is essentially an underlay for the replicas; this is essentially a backend for the storage, and again, I think I can't stress enough that the backend requirement is important. Not to say that it won't work without a dedicated backend, but there will be a problem with performance down the road. So the backend, for Ceph and I think for NexentaEdge specifically, is critically important. And the client net is essentially a pod network.
E: Here we define the network interface you want to use for the backend, and also the location inside the container from which to take the pre-mounted disks, and then we basically just expose the two networks to the target containers. Essentially there is the actual daemon and a few sidecars: an audit trail, which aggregates the statistics and sends some of them to the management framework, and a watcher, which does the internal coordination for the FlexHash tables.
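To make the shape of this concrete, here is a minimal sketch of a cluster definition along the lines Dimitri describes, with a backend interface, a client interface, and the directory holding the pre-mounted disks. Every field name here is hypothetical, reconstructed from the narration rather than taken from actual NexentaEdge manifests:

```yaml
# Hypothetical sketch only; kind and field names are illustrative,
# not NexentaEdge's real YAML.
kind: NedgeCluster
metadata:
  name: demo-cluster
spec:
  network:
    serverIfName: eth1      # backend/underlay interface carrying replica traffic
    brokerIfName: eth0      # client-facing pod network interface
  storage:
    directory: /data/nedge  # where the pre-mounted disks appear inside the container
```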
E: So while creating, what it does is magically discover those locations and the disks, automatically build the FlexHash tables, and so on. Unlike in the case of Ceph, where there's a monitor that essentially does the monitoring for the filesystem, in the case of NexentaEdge we don't have a metadata server. Each disk essentially has a slice of the metadata and acts as its own metadata server. So if we look now that it's already up and running, we have containers on each node: again, one daemon and two sidecars.
E: What you see here is essentially a view of the NexentaEdge cluster. It has support for multi-tenancy, and essentially what we're trying to do here is use the CNI framework and connect network isolation with the storage isolation, and that's what I now want to show off.
E
Just
receive
demo,
so
this
in
insurance
you'll
be
specifically
isolated
into
the
demo,
not
a
demo
channel
network.
So
let's
take
a
look
on
internals
of
this
tree
service.
Here
we
can
essentially
define
the
particular
notes
runnin,
but
also
we
can
select
and
pre-select
what
you
notice
has
to
serve
this.
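Pinning a service to pre-selected nodes in this way maps naturally onto a standard Kubernetes nodeSelector. A sketch, with a made-up node label and image name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nedge-s3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nedge-s3
  template:
    metadata:
      labels:
        app: nedge-s3
    spec:
      nodeSelector:
        nedge.example.com/s3: "true"  # hypothetical label marking the serving nodes
      containers:
      - name: s3
        image: example/nedge-s3       # hypothetical image
```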
E
Now
that
this
is
created,
essentially,
it
can
be
enabled,
but
it
can
be
enabled
also
from
the
count
sorry
here
what
I
gonna
do
are
going
to
show
you
the
yellow
files
in
Shalhoub
January,
like,
for
instance,
in
case
of
I
Scotty.
You
will
see
that
generated
under
I
scratch.
It
service,
initiate
the
service
and
started
up.
It
is
kill
out,
I
scuzzy
it
also
what's
interesting
about
this
car.
E
Their
progress
should
mention
is
that
all
Scotty
and
NFL
they
essentially
provide
a
way
of
building
highly
available
network,
and
this
is
not
exactly
what
Cooper
Nigel
does
is
it's
replica
set,
it's
actually
kind
of
single
single-digit
second
resolution,
each
a
configuration
it
can
be
built
in
the
virtual
IP
will
be
available.
This
will
provide
a
guarantee
fall
over
time.
These
are
single
digits
and
the
way
it
works
actually
have
active
and
passive,
and
the
punch
of
in
the
standby
mode
is
already
running.
E
So
we
need
to
expose
pretty
much
the
standard
demons
they.
Obviously
they
isolated,
they're
running
in
the
container.
So
there's
nothing
really
much
special
here.
The
s3
looks
like
this:
the
because
Asia's
three
implementations
fully
compatible
with
a
the
boyars,
obviously
because
of
the
the
way
we
working
recreating
the
chunks
and
we
splitting
the
chunks
across
the
cluster.
It's
fully
mutable
implementation
with
the
or
running
multi-part,
etc,
supports
polishes
full
spec.
E
Okay,
we
have
demon
running
here.
We
have
essentially
two
networks,
one
part
network,
just
168
0,
and
this
one
is
under
the
network
used
for
the
replicas.
As
you
can
see,
we
do
not
do
any
ipv6
management
on
underlay.
What
we
using
is
automatically
provisioned
clean
address
if
a
TV
6
is
enabled
it
is
a
must
to
have
so.
This
addresses
were
killed
and
magically,
so
we're
just
simply
grabbing
this
address,
and
this
this,
what
becomes
essentially
part
of
the
communication
channel
so
cursing
is
supposed
to
see
all
the
three
nodes
automatically
as
well.
D: What Dimitri is demonstrating here is a single tenant accessing the backend, and we could do multi-tenant with multi-tenant logins. But the problem is that some of the daemons we use don't support multi-tenancy, and these are daemons that are actually commonly used, for instance the NFS Ganesha daemon. You have to run each tenant in a separate container, because Ganesha does not know how to log people into different authentication domains.
E: I mean, yeah, I actually got the CNI piece working. I wanted to show people that it essentially gives us isolation on a pod network for the multi-tenancy, and I wanted to briefly show you guys what it really gives us. For instance, as I mentioned before, the default network was our pod network, and this one was our multi-tenant network with the tenant demo. So if you look at the namespaces, you'll see the namespace demo. So what have we done here? We've connected this network isolation to the storage isolation.
E
Remember
I,
created
channel
demo
also
when
I
was
creating
a
cluster
and
importantly
to
mention
that
I
also
for
the
purposes
of
the
demo
set
up
a
policy
I
limited
that
key,
a
non-story
are
only
a
hundred
megabits
per
second
I,
then
create
a
demo
group.
I
I
added
this
policy
to
that
demogroup
and
there
is
also
a
solution
policy
which
are
not
enabled,
but
it
can
be
easily
enabled,
so
they
can
actually
control
the
flows
inside
the
channel
groups
right.
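The policy shown here is NexentaEdge's own tenant-level mechanism. For comparison, Kubernetes offers per-pod traffic shaping through annotations that some network plugins honor. A sketch of a 100-megabit cap:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-client
  annotations:
    kubernetes.io/ingress-bandwidth: 100M  # honored only by plugins that support shaping
    kubernetes.io/egress-bandwidth: 100M
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```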
D: What we need for the replication network can ultimately be dealt with by things that are already part of Kubernetes. What we need from the tenant networks can also be dealt with natively, in fact in multiple ways. But one of the things we also want in the long run is the ability to have a shared storage backend that's accessed by multiple tenants, and to do that while being compatible with iSCSI targets and NFS, particularly the Ganesha NFS daemon.
E
Yeah
but
to
some
degree,
this
isolation
already
provides
shooting
multi-tenancy,
so
by
the
way,
kaeleen
is
going
to
be
blogging
about
this
on
our
extended
github
that
I
Oh
website.
So,
if
you
guys
interested
can
learn
more
about
our
thoughts
on
multi-tenancy
and
in
pilgrimages,
please
take
a
look
and
let
me
continue
with
the
demo
after
the.
E: So I want to quickly show you how to connect a particular tenant to the storage and to this networking channel. What have we done here? We set up essentially a label in this deployment where we mention the tenant, its isolation, and what network it is on, this overlay network, and the group. While this executes, we will take a look inside that container and you'll see how the network is laid out. So let me run it.
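A sketch of the deployment metadata being described: a label naming the tenant and an annotation selecting the isolated overlay network. The exact keys are assumptions, not the demo's actual YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:
    nedge.example.com/tenant: demo             # hypothetical tenant label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        cni.example.com/network: demo-overlay  # hypothetical network-selection key
    spec:
      containers:
      - name: app
        image: example/demo-app                # hypothetical image
```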
E
So,
what's
interesting
here
is
that
the
client
network
now
picks
up
their
192
162,
which
is
the
isolated
network
and
also
has
a
polishes
set
to
limit.
The
bandits.
I
will
now
go
ahead
and
create
a
demo
ball,
but
me
so
we
should.
Essentially,
this
is
how
the
application
essentially
can
use
this
multi
talents.
D: And the goal here, of course, like Dimitri mentioned with the bandwidth: what you want with a shared resource network is that you can put limits, aggregate limits, on the bandwidth for any one tenant, but the backend network can actually burst for any one client. It's not like we're building separate backend networks that are each stuck in their one slice of the pie; it's a common network and it can pool resources, just like a bank pools funds.
B: Thanks, Dimitri. One question I have: with the integration of Rook and NexentaEdge as another backend, which I'm assuming is the interesting work here, where do you see Rook helping you the most? I mean, I saw some things where maybe, you know, you have things in ConfigMaps and configuration that could be CRDs that Rook handles, but what's your vision, I guess, for how Rook helps NexentaEdge?
E: That's great. So Rook will help, I think, in two ways. One is that it simplifies the general configuration across multiple backends. For instance, I do not expect any user to learn multiple management tools, and Rook provides a unified interface: whether you're talking about Minio or Ceph RGW or NexentaEdge S3, it will be kind of a similar experience for the end user. So for a user who knows how to operate Rook, it will be easier to essentially get started with Minio or NexentaEdge.
E: As for those files, essentially some of them are auto-generated during the creation of the service, in the GUI or initially; like, for instance, on a node you can have the services listed here, and they can be automatically activated and created here as well. But this still requires an end user to configure the backend, and the backend configuration is not straightforward: for instance, he needs to go into the typical configuration and modify this file.
E
You
know
prepare
this
config
map
as
you're
writing
notice.
The
networking
back-end
is
another
complex
configuration
point.
So
if
we
let's
say
at
some
point
vision
root
can
solve
this
separate,
back-end
and
I
know
kubernetes
networking
group
actually
working
on
adding
custom
tools,
definition
for
the
network
so
soon
we're
gonna
have
ability
to
do
this.
In
writing.
Ink
and
more.
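The network CRD work mentioned here eventually settled on roughly the following shape (the Network Plumbing Working Group's NetworkAttachmentDefinition, used by plugins such as Multus); shown as a sketch with illustrative values:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: replicas-underlay
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "host-local", "subnet": "192.168.2.0/24" }
  }'
```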
D: Here the goal is almost more to create a convention than a lot of code. All of these things could have been done with Kubernetes, but it would be a real headache for each storage provider and each service provider to figure all of these out. Having a roadmap that says, here, this is the template for how to plug multiple storage backends into Kubernetes, outlined by Rook, with whatever minimal code is needed to support that.
E
Correct,
like
forensic
guys,
take
a
look
on
this
description
right.
This
is
roadies
definitions
for
us.
This
is
just
extremely
player
boom
right.
We
have
like
a
lot
lot
more
parameters
here
to
manage
so
like
have
a
convention,
how
to
define
those
it
would
be
really
nice
and
so
that
whatever
the
user
wants
to
configure
the
back
end,
we
have
well-documented
mechanism
in
a
book.
Okay.
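For reference, the Rook cluster definition under discussion was roughly this shape in the 0.7 era; the trailing comment marks, purely as an assumption, where a backend such as NexentaEdge would hang its extra parameters:

```yaml
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    useAllDevices: false
    # a NexentaEdge-style backend would need many more knobs here,
    # e.g. journal groups and per-interface network settings (hypothetical)
```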
E
How
to
do
this
also
I'm,
looking
forward
to
he
block
local
pay
system
volume
going
from
alpha
to
beta
soon
I'm,
not
entirely
certain
that
it
is
scheduled
for
111,
but
I
know
in
110,
if,
at
least
for
me,
it
wasn't
working
correctly,
so
I
tried
the
block
local
piston.
What
I'm,
still
not
working
correctly
so
for
current
will
be
using
essentially
just
description
of
the
disks
like
that
and
essentially
be
forced
to
pass
privileged,
not
in
this
case
and
mount
def
directories
in
that.
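For context, the local persistent volume feature being referred to looks roughly like this; the nodeAffinity field was new around Kubernetes 1.10 and raw-block mode was still alpha, which matches the problems described:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: node1-sdb
spec:
  capacity:
    storage: 1Ti
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Block          # raw device support, still alpha in this era
  local:
    path: /dev/sdb           # illustrative device path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
```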
E: For one of those, we have this notion where you can specify a journal device, and it could be something like an SSD, right, so that the journal device would automatically be used as a fronting device for the HDD, etc. Those groups can be built like this, and these sorts of configurations can be difficult to manage; I think Rook needs to have a way of essentially building those groups. I noticed that you guys scheduled similar work for 0.9, and I think this is something that would be really beneficial for NexentaEdge.
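Rook's Ceph configuration expresses a similar idea with a metadataDevice setting; a sketch in that spirit, with the caveat that NexentaEdge's journal-group fields would differ:

```yaml
storage:
  nodes:
  - name: node1
    devices:
    - name: sdb                # HDD holding the data
    - name: sdc
    config:
      metadataDevice: nvme0n1  # SSD/NVMe fronting the HDDs for journals/metadata
```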
E
Drives
is
another
one
like,
for
instance,
the
drive
goes
bad.
How
are
we
gonna
handle
this
error?
We
need
to
provide
a
mechanism
to
blink
the
drive
and
essentially
so
it
will
show
up
for
the
end.
User
of
this
drive
needs
to
be
replaced,
etc.
Right,
so
all
that
is
something
I'm
looking
forward.
We
can
improve
this
repair
in
kubernetes.
A: So yeah, thank you, Dimitri, for sharing this demonstration with us. That kind of helps us understand a little bit better what the experience for NexentaEdge is. I think the next step for going further with this would be, you know, a one-pager or some sort of short design about what Rook would specifically need to do to help with the deployment and management of NexentaEdge as a backend. So, looking forward to iterating over some of the specifics in that forum.
E: Yes, that's definitely our next step. We're looking forward to the work being integrated, maybe even on a branch. When you can, you can probably ping us on Slack or at the next meeting with the status on this, and we can then start prototyping. The goal for us was to have some prototype running before the Copenhagen KubeCon, but that would be just a prototype, just to demo it at the show, if we can do it in time. If not, then we'll probably need to kind of move on.
A: Alright, so the agenda doc should be displayed again here. The last thing we had on the agenda for today was PRs that we wanted to discuss, and one that I was interested in: we got a PR yesterday from a community contributor about supporting IPv6. When I took a look at it just a couple of minutes ago, though, it looks like it was targeting the release-0.7 branch, which we don't want.
A
We
wanted
to
go
into
master
instead,
so
we
need
to
change
that,
but
it
looked
like
the
the
basic
part
of
the
change
here
was
just
how
you
know
the
string
concatenation
to
define
an
ipv6
address
and
I
would
be.
I
was
surprised
that
that
would
be
they
don't
like
really
the
only
thing
that
was
lacking
in
support
for
ipv6
in
Brooke.
So
if
anybody
had
some
understanding
of
what
other
lurking
issues
there
may
be
for
ipv6,
that's
why
I
wanted
to
bring
it
up
in
this
form.
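The string-concatenation point in a nutshell: an IPv6 address must be bracketed before the port is appended, otherwise the colons are ambiguous. Illustrative values only:

```yaml
endpoints:
  ipv4: "10.0.0.10:6790"    # "<addr>:<port>" works for IPv4
  ipv6: "[fd00::10]:6790"   # IPv6 needs brackets; naive concatenation breaks
```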
D: The real problem is in CNI plugins and how they enforce the isolation. A lot of them have done it in ways that are IPv4-only; others are built in a way that's more L2-oriented, which means they just support IPv6, and it's just a matter of Kubernetes realizing that it can do it.