From YouTube: 2018-12-04 Rook Community Meeting
B
Yes, alright, so you also uploaded the PR for the scale-down operation. It's not very big; it's really not much new code. So since KubeCon is also coming up, and you said you may want to play with it a bit, you know, if we could have that demo and it would scale them. You know, I think it would attract more users, because right now there is not any software out there that can scale Cassandra up and down.
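A scale-down like the one in that PR typically removes the highest-ordinal members first, the way a StatefulSet shrinks, so each node can be decommissioned and its data streamed off before the pod disappears. A minimal sketch of that selection logic (hypothetical helper names, not the actual PR's code):

```python
def members_to_decommission(current_members, desired_count):
    """Pick which Cassandra members to remove when scaling down.

    Members are removed highest-ordinal-first, mirroring how a
    StatefulSet deletes pods from the end of the ordinal range.
    """
    if desired_count >= len(current_members):
        return []  # nothing to remove (or this is a scale-up)
    # Sort by the trailing ordinal in the pod name, e.g. "cassandra-2" -> 2.
    ordered = sorted(current_members, key=lambda name: int(name.rsplit("-", 1)[1]))
    # Everything past the desired count goes, highest ordinal first.
    return list(reversed(ordered[desired_count:]))

# Example: scale a 4-member rack down to 2.
print(members_to_decommission(
    ["cassandra-0", "cassandra-1", "cassandra-2", "cassandra-3"], 2))
```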
B
So yeah, this is the issue, yeah, for the Cassandra operator.
B
So I keep a table of the status of things, and the last entry is the scale-down, so we have the design PR and the implementation. The implementation PR failed the integration testing, but it didn't fail because the Cassandra test itself failed. It was a random error on 1.8; the others passed.
F
Oh, to clarify there: it's frustrating because it only fails on GCE, and it's failed like four out of the last five times, and from the logging you just added, it appears that the operator pod is going away, because we can't collect its logs. So it looks unrelated to that other CRD issue.
F
Maybe one more thing to point out is the upgrade automation that Blaine is working on. There are two issues for that in the board, 997 at the bottom of the review. Blaine has a PR for automating the upgrade of the deployments, and then he's working on the documentation. So, Blaine, do you want to summarize where you are with all that?
D
I think, I guess, more of the issue that I'm seeing is hitting the time crunch: working on making sure that the deployments of, like, the mons and manager and MDSes and object gateways get updated on just regular updates. If there are updates, those are failing the integration tests. It definitely was working before, but that might have been something that I missed while I was watching. So we're trying to figure that out today; I think that is the bigger priority.
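The upgrade automation described here boils down to finding which daemon deployments are running an image other than the target version and rolling only those. A rough sketch, with deployment names and image tags as illustrative assumptions rather than Rook's actual code:

```python
def deployments_needing_update(deployments, target_image):
    """Return the names of daemon deployments whose container image
    differs from the target, i.e. the ones an upgrade should roll.

    `deployments` maps deployment name -> currently running image.
    """
    return [name for name, image in sorted(deployments.items())
            if image != target_image]

# Hypothetical cluster state: the mgr was already updated, mon and MDS were not.
current = {
    "rook-ceph-mon-a": "ceph/ceph:v13.2.1",
    "rook-ceph-mgr-a": "ceph/ceph:v13.2.2",
    "rook-ceph-mds-myfs-a": "ceph/ceph:v13.2.1",
}
print(deployments_needing_update(current, "ceph/ceph:v13.2.2"))
```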
F
My thoughts on the timeline are, well, we really have a lot we need to merge to get this in, and I'd like to look closely at the ordering of some of these bigger merges, because my v1 conversion will potentially have merge conflicts with some other things. So we don't have to talk about that ordering here, but as we merge things, I don't want to delay the v1 PR; I feel like I'd like it to come in after the other big ones.
A
All righty, okay, let's keep going. I selfishly added these here in front of everybody else, and we're not going to talk about it very long, but we just open-sourced this project about an hour ago, and this is a very big project we've been working on at Upbound. So there's some interesting integration that we can do with Rook on this, so I think there's a lot of excitement and a lot of cool things. The GitHub repo is public and we are welcoming contributions.
B
So basically, what happened is they liked the whole Kubernetes stuff. We like how the operator worked, and it actually did things that are very difficult to do if you don't have something like Kubernetes giving you all these primitives. So yeah, things like scale up and scale down used to have to happen by hand, and it's very tiring, and you don't have all this Kubernetes goodness, so they liked it.
B
They hired a consultant, and essentially they want to do their own releases, and that is logical, and that is totally acceptable, because they want to have control over when they release, and they may want to release fast, or they may want a hotfix, stuff like that. So what they decided to do is essentially take the existing code as a base and then mutate it to bring it closer to their business needs. So yeah, that's kind of a gray area in my book, I don't know, but I'm discussing it with them.
B
Maybe they want to keep Rook in this and have their own copy as being more ahead, and the very stable stuff will be in Rook, so they can also leverage the integration testing and they don't have to do things all over again. So yeah, that's something that I wanted to say, and I want to say it because other such providers may decide to do the same thing in order to have more control over releases.
B
I mean, you know, a framework already to go: integration testing on Kubernetes is not really a solved problem, everyone does something, and Rook has a very solid framework. So yeah, I'm presenting what I believe to be the advantages to them. What I presented as a solution is, you know, I told them you can keep your own repo, so you can push releases, and asynchronously sync those changes to Rook, yeah.
F
A thing to consider is that, with RedHat and SUSE, I know there's this pattern of upstream and downstream. Rook is the upstream project, and at Red Hat we want to ship the downstream project, which is basically: you do the work upstream, and then you fork it downstream, and you have to backport things upstream or downstream either way. So there's a pattern there. I'm not as familiar with the downstream side of it, but...
I
So for this demo I have a single-node OpenShift cluster, and we have an nginx deployment with two replicas as an example that we're going to predict resources for. Alameda has two pods on it: we have alameda-ai, which is our prediction engine, and also our operator, which watches for our Alameda custom resource that will tell us which pods to predict resources for. So for the deployment of the nginx, it's important that we have...
I
So that's what our custom resource, the Alameda deployment, would look for. We have a selector with match labels app: nginx, so our operator will look for this resource, then match the labels, and then predict resources for the nginx pods. And then, after we set that up, we can do `oc get alameda` resources, and we should see the Alameda resource, and then this will find all the pods that match the label, like for this one.
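The matching described here follows standard Kubernetes label-selector subset semantics: a pod matches when its labels contain every key/value pair in the selector. A small sketch of that rule (pod names and structure are illustrative, not the operator's actual code):

```python
def select_pods(pods, match_labels):
    """Return the names of pods whose labels contain every key/value
    in match_labels -- the subset semantics a Kubernetes label
    selector uses."""
    return [
        pod["name"]
        for pod in pods
        if all(pod["labels"].get(k) == v for k, v in match_labels.items())
    ]

# Hypothetical pods from the demo namespace.
pods = [
    {"name": "nginx-7db9f-abc12", "labels": {"app": "nginx"}},
    {"name": "nginx-7db9f-def34", "labels": {"app": "nginx"}},
    {"name": "alameda-ai-0", "labels": {"app": "alameda-ai"}},
]
print(select_pods(pods, {"app": "nginx"}))
```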
I
We recommend: do this much CPU, this much memory, 538, 548. So that would be our recommendation that your operator can use to apply, or it can just use the raw prediction data that we have, which just lists out all the predicted usage we have, every 30 seconds for the next 30 minutes. So this is for memory, and then we have this for CPU, all right, and yeah.
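How the 30-second prediction series gets collapsed into the single recommendation isn't spelled out in the meeting; one plausible scheme is peak predicted usage plus a safety headroom, sketched below (the headroom factor is an assumption, not Alameda's documented behavior):

```python
def recommend(predictions, headroom=1.1):
    """Collapse a predicted usage time series (e.g. one point every 30 s
    over the next 30 min) into a single resource recommendation:
    peak predicted usage times a safety headroom factor."""
    return round(max(predictions) * headroom)

# Hypothetical predicted CPU usage in millicores over the window.
cpu_predicted_millicores = [420, 455, 490, 530, 510, 470]
print(recommend(cpu_predicted_millicores))  # peak 530 plus 10% headroom
```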
A
So how does it handle, like, if one of those nginx pods was to die, and then a new pod came up in its place, and it still got the same label? Does it track across that lifecycle where it goes down and comes back up? Yeah.
I
So before, the solution would automatically detect if we stopped getting metrics from a pod, and it would just delete that pod off the list. But then, yeah, I saw we just had a new build yesterday, so I didn't really have time to go through this one. I'm not sure if it's the same behavior; we're still working on that.
A
We're already doing gets on CRDs and stuff like that, right? Yeah, so it just makes it much easier to start consuming, and then having that raw data to make decisions in the operator for scheduling or placement or whatever. You know, I haven't thought through how best it could use that knowledge. Because right now, you know, its sort of scheduling capabilities aren't all that advanced anyway to begin with, but this provides data to be able to do, you know, more interesting resource utilization and limiting and stuff like that.
A
So the mechanism in which it's surfaced seems like, you know, a friendly, normalized way for applications inside Kubernetes to consume it. The only thing I'd be kind of interested in is how this scales under load in the cluster: like, all these data points it's keeping, does that start causing more load on the Kubernetes API server, or on the etcd that's backing it? Or, you know, how API performance in the cluster gets hit by this monitoring.
I
So our prediction engine just collects, like, logs... or actually, yeah, it would create a pod list. I'm actually not sure if it's on etcd or if it's on just, like, an external type of resource pool; I would have to check with our dev team. But yeah, it definitely does create some load on the cluster.
A
Yeah, and being able to get a better understanding of that would be interesting as well. Except I don't know how well CRDs scale, because, you know, under the covers a CRD's data and its status and everything is all going to get pushed down to etcd; that's the persistent store for all of that stuff. And I don't know, as these objects get bigger and bigger, I don't know if etcd might start choking, or yeah.
E
It doesn't have to be in the Kubernetes cluster in etcd. You can have objects which, when they get called, generate their output, like, I think, with the aggregated API or something like that. For example, Stash uses that: when you get a list of backups in Kubernetes, it basically goes in the back and asks your backup server, hey, what backups are available. So that could be a possibility.
E
Okay, just as a quick comment for the limits there: I can't say the name right now off the top of my head, but there is a file in Kubernetes which contains, I think, most constraints of the Kubernetes API, like how long labels are allowed to be, how big annotations are allowed to be, and stuff like that.
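Two of the constraints enforced there are a 63-character cap on label values and a 256 KiB cap on the total size of all annotations on an object. A small validator in that spirit (a hypothetical helper, not the Kubernetes validation code itself):

```python
# Limits as documented for the Kubernetes API: label values are capped
# at 63 characters, and all annotations together at 256 KiB.
MAX_LABEL_VALUE_LEN = 63
MAX_TOTAL_ANNOTATION_BYTES = 256 * 1024

def validate_metadata(labels, annotations):
    """Return a list of human-readable violations of the two limits above."""
    errors = []
    for key, value in labels.items():
        if len(value) > MAX_LABEL_VALUE_LEN:
            errors.append(
                f"label {key!r}: value longer than {MAX_LABEL_VALUE_LEN} chars")
    total = sum(len(k) + len(v) for k, v in annotations.items())
    if total > MAX_TOTAL_ANNOTATION_BYTES:
        errors.append("annotations exceed 256 KiB total")
    return errors

print(validate_metadata({"app": "x" * 64}, {"note": "ok"}))
```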
F
One more question is that, as this... there's nothing specific here around, like, monitoring storage pods or Rook-specific pods; it's for any pods you might have. So Rook has, you know, other backends that are related to storage. So how is it specific to storage? Or is it a general CRD application? I'm just trying to make the connection with Rook specifically.
I
Yeah, it's definitely not tied to just storage, but we see a big use case with storage, because, I don't know if you remember, last time I asked the question: how do you guys ensure the resources don't conflict with each other? Because when Ceph rebalances, it uses a ton more resources than normal, and then Cassandra's really resource heavy. So this is a way to, like, operate the storage pods, you know, autonomously, right.
H
So this is cool stuff, actually, and in general prediction of any resources is cool stuff, and it has to be there, in my opinion. My question is: do you guys have, like, some prototypes, or are you thinking about whether you can go closer to prediction of some user metrics? Such as, for instance, because Rook is essentially all about storage: for instance, based on a user's usage of the storage, let's say you're going to be out of capacity in that many months, or something like that. Oh yeah.
I
So we are already a plug-in on, like, the Ceph master branch, and we're going to be released in the Nautilus release, and they have capacity prediction for the Ceph cluster. So with our prediction engine, basically, as long as you can give us, like, a pattern and feed us metrics, we'll be able to give you guys an output. So it's not tied to just, like, CPU and memory.
J
The time interval limit is actually 50 minutes, but it is configurable, so you can generally shorten or lengthen the prediction. The short-term prediction is for monitoring; that's the normal case, where you can adjust the resources to meet the short-run tasks. The long-term prediction is for users to plan ahead for future usage, so those are the features that are in charge of training. Cool.
H
Okay, so basically we're ready for integration into 0.9: we completed the documentation, completed all the CRDs and all the necessary integration frameworks, now passing. We do appreciate some additional feedback, so if you have a few minutes to take a look, and there are, like, last-minute changes, we'd be very happy to do, like, hey, change this, change that, this grammar issue, and so on; we'd highly appreciate that sort of thing. So this release is going to be alpha.
H
The operator and all of its CRDs. And we're planning to migrate to beta in the 1.0 time frame, where we also kind of would like to bring in the new idea of adding multi-backend support to Rook itself, so I think the Ceph guys will also appreciate this work. Where, essentially, we can, you know, not just do the host network, but actually define the specialized interface to run the backend I/O and the front end, which is especially important for isolation in the multi-tenancy use cases.
H
So that is also kind of our agenda for 1.0. But what is currently not working in this release is scale up and scale down, because we're using a StatefulSet, so we have to actually do some hooks into the operator. This is not yet fully done, but I think our first integration, what we have, is pretty good. It gives a nice backplane storage data plane, essentially, which provides S3, an extended S3 protocol, and a fast scale-out NFS.
A
It's not published yet, but sure, the publish wouldn't happen... I don't... we don't do that on pull requests; I think it wouldn't happen until master, like, and I'll have to go and create the new repositories for it on Docker Hub and such. So normally what we do is, just before it gets merged, switch out the images in the YAMLs to rook/edgefs, or rook/whatever it's supposed to be, and just do that right before we merge, pretty much. Okay.
E
Well, I basically added it to the community meeting agenda just to make sure that we're all on the same page. Right now, as Travis and I kind of said earlier, there is an issue in it, but well, it isn't an issue in my pull request, as far as I understand; it is an issue in the Ceph master right now as well.
E
And, at least, I would like to see that. Well, from my perspective, we have identified the issue, and Travis is going to add the missing code to handle it in his pull request. Or, yeah, it's about how we should continue. Right now, I think you can do the last sweep, Travis; I've implemented most of your feedback.
F
Basically, 1.8 and 1.9 are not going to be supported anymore, because they aren't backporting the fix, right. So I guess I'd like to propose that we don't just immediately remove support for those. If people need time to transition, then they're going to need time to transition, and there's nothing that really prevents us from running there.
E
I think one problem would be that if we only drop 1.8 and 1.9 for the upcoming 0.9 release, then, as far as the master branch goes, I think 1.10 or something would also no longer be supported by then. One of the issues aside, I think I read something like they support the last three versions, unless it's a long-term support release. So I think that, well, in the case of upstream right now, 1.13 was released yesterday.
E
Yesterday, so, well, basically, as I understand it, there's no support anymore for 1.8 and 1.9, and well, 1.10 is still supported, as far as I understand, right now. So I think dropping just 1.8 and 1.9 is problematic: on the one hand, we have OpenShift to support; on the other hand, there will probably be at least 1.14 out when we make the next release, I would say, right now. So that can be a potential problem again in the future if there should be a security issue.
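The "last three minor versions" policy being discussed can be written down directly; the sketch below assumes the plain upstream window (OpenShift and long-term-support releases follow their own schedules):

```python
def supported_minors(latest_minor, window=3):
    """Upstream Kubernetes supports roughly the last three minor
    releases; return that window given the newest minor number
    (e.g. 13 for 1.13)."""
    return [f"1.{m}" for m in range(latest_minor - window + 1, latest_minor + 1)]

# With 1.13 just released, the upstream support window looks like this:
print(supported_minors(13))
```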
E
That should be done simply because, well, extensions is... well, I think they will be automatically translated if we create an extensions object, but as far as I know, the extensions deployments, and DaemonSets and stuff like that, in the apps... sorry, in the extensions API, will be removed at some point. But also, in general, if we only support versions which have the apps/v1 API, we should also use only apps/v1.
A
The CNCF, like, they partnered with these merchandising folks that are making all this swag; they make it for, you know, all the CNCF projects. So they're making, you know, new Rook sweatshirts, socks, t-shirts, I think. I don't know if they're doing other stuff too.
A
Alrighty then, we will go ahead and end the meeting now. We've got a lot to wrap up this week, with trying to get 0.9 out for KubeCon next week in Seattle. So let's all stay in close contact, try to get those PRs done and feedback incorporated quickly, and try to wrap up as much as we can. Yeah, thanks.