From YouTube: 2018-03-12 Rook Community Meeting
A: All right, can you see my agenda doc shared here? Yeah? All right, okay, so let's go ahead and get started with the items here. The first one I wanted to discuss today was that we got the 0.7.1 release out, finally. We had a number of issues getting that release out, because we had to migrate our Jenkins server, our continuous integration environments, and our release build server to a new environment, which delayed the release. Since we do plan on migrating to being hosted by the Cloud Native Computing Foundation at some point in the future, I'm wondering if it would be useful right now to capture some of the obstacles that we ran into during this migration, while it's still fresh in our minds, so we'll have that as a resource the next time we try to do a migration like this, coming up to the CNCF. Bassam?
B: No, honestly, I don't know if it does. For a lot of the issues, we were just trying to copy the environment that was there, and there were just a lot of things that didn't copy well, like instance sizes and instance types, and it caused a lot of failures that weren't expected.
B: I think the next step is that we show up to the working group, bring an agenda item, start integrating work, and start the design discussion there. I don't know if they're running Jenkins or not, but it kind of doesn't matter, and I honestly don't know if we'd use them just for CI/CD, or if we'd really use them for integration testing and large-scale testing.
A: That's awesome. Besides being large-scale, they probably get a really interesting, representative deployment of a lot of the cloud native landscape, with all the projects integrating together. That's really awesome; I did not know that. I thought it would still be isolation testing. Awesome, okay, cool, so we'll follow up on that then, and there's no real need, I think, as you mentioned, to document the obstacles we ran into this time, since they might not have much applicability going forward.
A: Okay, the next issue I wanted to talk about was with 0.7. We got a release out and we fixed a handful of issues, but I wanted to look and see what's still lingering in the 0.7 time frame that we may want to address. So this one here: a couple of folks have been running into issues where, when they have an application pod consuming Rook block storage that is provisioned and mounted by the Rook agent, there seems to be a difficult-to-reproduce, but definitely happening, case where the volume will be formatted during a failover scenario, which is obviously not a good issue at all. So we've spent some effort and some cycles on this, trying to understand what the scenario is to reproduce it reliably, and I don't think we have a lot of insight here yet.
A: The Helm job fails; we've seen a couple of instances of this, but it's not very widespread, because you do need to have an invalid semantic version set for Kubernetes for this to reproduce. I'd like to keep tracking this as a nice-to-have. I did not put it in the milestone, but I put it on the board here for 0.7, so if we do another minor release, we could pick up this fix as well.
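The failure mode described here, a strict semantic-version check tripping over a nonconforming Kubernetes version string, can be illustrated with a small sketch. This is not Rook's actual Helm hook; the parsing function below is a hypothetical stand-in:

```python
import re

# Simplified SemVer 2.0.0 pattern: MAJOR.MINOR.PATCH with optional
# pre-release and build-metadata suffixes. Note there is no leading "v".
SEMVER = re.compile(
    r"^(\d+)\.(\d+)\.(\d+)"      # major.minor.patch
    r"(?:-[0-9A-Za-z.-]+)?"      # optional pre-release, e.g. -beta.1
    r"(?:\+[0-9A-Za-z.-]+)?$"    # optional build metadata, e.g. +coreos.0
)

def parse_kube_version(raw: str):
    """Parse a reported Kubernetes version; raise if it is not strict semver."""
    m = SEMVER.match(raw)
    if m is None:
        raise ValueError(f"invalid semantic version: {raw!r}")
    return tuple(int(m.group(i)) for i in (1, 2, 3))

print(parse_kube_version("1.8.5"))   # a conforming version parses fine
try:
    parse_kube_version("v1.8.5")     # a leading "v" is not strict semver
except ValueError as err:
    print(err)
```

A check this strict only fails for clusters reporting nonstandard version strings, which matches how narrow the reproduction of the bug was.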
A: The fix is not currently defined yet, exactly how to handle that, so Tim Jones, I believe, is going to be following up on that for us. Okay, great, all right. So that's everything in 0.7, and for 0.8, I don't know if we could take a look at the project board real quick, but it would have been nice to have seen some more of the major contributors here; we're missing Travis and we're missing Alex today. So I think that there's not going to be a ton
that we're going to get out of this, but perhaps highlighting some of what's in progress will be useful. So the first item here in progress is on my plate, and I've been working on it. This is the design for how we're going to properly enable more storage backends, since that's very applicable to the Nexenta guys that are on the phone right now as well.
A: Okay, yeah, it's assigned to me and it's on this board now, so we're tracking it at least, and we'll get to it when the other higher-priority things have been addressed. I think Travis is getting back to it soon. For removing the deprecated API server and command-line tool, he is a good candidate, since he knows all the changes that need to be made to our integration tests that are relying on it right now, so I think that's in good hands. And it looks like [inaudible].
A: I looked at it really quickly, and then Travis got some comments on it before I could, but my overall assessment of the design was that it was not very well fleshed out. It was very terse and cursory, so I'd like to see some more thought put into it, especially since it's a long design document that should serve as documentation for how the architecture works. So I don't think it's quite at the level that we would want before we merge it, honestly.
A: The roadmap is updated in master now; that merged last week, so we have a clearer picture of what we want to accomplish in this milestone and the next milestone. I think there may be some tickets that have not been opened for some of those items, but the roadmap has been defined. Yeah, I think all the items I'm seeing here pretty much match the roadmap very well. Okay.
A: Right here, I just had a couple of PRs that I wanted to talk about, but Alex isn't here, so we'll skip that one. And then this is just going to be a question to the community; there is a pull request, so this will go very quickly and we'll get to the Nexenta guys pretty much right after. There's a pull request here where a contributor, Pierre I believe it was, added a Helm chart for the cluster type, and I have not personally gotten a chance to look at it.
A: Right. So I see a distinction between the usefulness of having a cluster chart versus implementing all the other ones. I think that there's still value in having a cluster chart before we take on the commitment of a one-to-one chart-to-type mapping, and the...
B: Yeah, my biggest hesitation is that if we wanted to push our Helm chart up to the stable repo for Helm, then creating a small chart that just installs the operator gives us a lot of leeway in terms of versioning types and all of that stuff. If we now have to push things like cluster and other types into some other repo upstream, then it becomes a really hard thing to go version every time.
A: Okay, putting that in a comment sounds useful. All right, so we can move on to the community topics here. Travis is not here, oops, for his request to move the meetings. I think he's made that request known, and he wanted to know your opinion on it as well, Bassam, about whether Tuesday would work for you, or really for anyone in the greater community.
A: Okay, yeah, I wanted to get an understanding from the rest of the maintainers, and it sounds like we can do it; the community will need to weigh in as well. Okay, so I think we're now ready to go ahead and start talking with Dimitri and his team here about NexentaEdge. So Dimitri, do you need to share your screen or any resources, or do you just want to start talking through it?
D: It supports object, block, and file. As object, it can be essentially S3 or Swift. It's very high performance, low latency; for instance, we have support for extended S3 primitives where you can get access at a lower granularity than the four or five megabytes that S3 dictates, and all modifications to the object are versioned at the origin. We also provide a gateway with built-in failover capability; it's a master/slave configuration, so it automatically fails over within ten- to twenty-second intervals.
D: So the NFS is indeed over object, so every file which is created in an NFS share is basically visible as an object as well. It's fully transparent. On top of that, as high-level functionality, so to speak, NexentaEdge provides, interestingly, global deduplication. It's not at the level of an OSD; it is actually global.
D: We have global indexes spread out across the cluster, and what's interesting about our particular design is that, typically, if you have deduplication, there is a performance impact from the index lookup. In our case, because we do not have a separate set of metadata servers, the more connected disks you add to the cluster, the faster the indexes will work, so deduplication will not slow things down but will actually speed them up. Similarly, compression is also done not at the level of the OSD, but at the higher level, at the networking level.
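Global deduplication of the kind Dimitri describes is commonly built on a content-addressed index, where identical chunks hash to the same key and are stored once no matter which object references them. The toy sketch below illustrates only the idea; it is not NexentaEdge's implementation:

```python
import hashlib

class GlobalDedupIndex:
    """Toy content-addressed chunk store: one global index keyed by
    chunk hash, so identical chunks are stored exactly once."""

    def __init__(self):
        self.chunks = {}    # hash -> bytes (the single stored copy)
        self.objects = {}   # object name -> ordered list of chunk hashes

    def put(self, name: str, data: bytes, chunk_size: int = 4):
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(key, chunk)   # dedup: store once
            refs.append(key)
        self.objects[name] = refs

    def get(self, name: str) -> bytes:
        return b"".join(self.chunks[k] for k in self.objects[name])

store = GlobalDedupIndex()
store.put("a", b"ABCDABCD")   # two identical 4-byte chunks
store.put("b", b"ABCDXYZ!")   # shares one chunk with "a"
print(len(store.chunks))      # 2 unique chunks stored, not 4
print(store.get("a"))         # b'ABCDABCD'
```

In a distributed system the `chunks` dictionary would itself be sharded across disks, which is the point made above about lookups getting faster as disks are added.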
D: ...degradation, so to speak, and I can talk a little bit more about these particular questions or interests. Yes, we do have some GUIs, but these GUIs are not necessarily what we are planning to integrate with Kubernetes. We want Kubernetes-native tooling to essentially drive the deployment and management, but for that, the Rook engine essentially has to have full power over deployment of the engine to each container.
D: Yes, we support everything in a container; it can run as a container and on bare metal as well. So with that, I wanted to quickly give you an introduction to immutable data and metadata. This is the technology we're using: essentially, think of it as immutable metadata referencing immutable chunks for the data, building a location-independent payload. If you think about any object...
D: ...this object gets cached, and after a modification it will be versioned. What you see as a result is that whenever you want to make a new modification, you create a new version, and that logic is essentially hidden from the writer. Under the hood, we will actually have multiple versions going on for the same object. This gives us a few benefits. One of them is that when we are doing a modification, we do not necessarily need to store the payload chunk location.
D: So we are only storing just the identification, and that gives us flexibility, because with that, we can find, via the manifest, any location based on where the content is living, and we have different technology which is helping us with this; on the next slide I'm going to talk a little bit about that.
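The write path Dimitri outlines, where every modification creates a new immutable version behind the writer's back, can be mimicked with a tiny copy-on-write store. This is an illustrative analogue, not the actual on-disk format:

```python
class VersionedObject:
    """Copy-on-write object: every modification appends a new immutable
    version instead of mutating data in place."""

    def __init__(self, data: bytes):
        self.versions = [data]          # version 0

    def write(self, data: bytes) -> int:
        # Writers just call write(); versioning happens under the hood.
        self.versions.append(data)
        return len(self.versions) - 1   # new version number

    def read(self, version: int = -1) -> bytes:
        # Old versions stay readable: nothing is ever overwritten.
        return self.versions[version]

obj = VersionedObject(b"v0 payload")
v1 = obj.write(b"v1 payload")
print(obj.read())      # latest: b'v1 payload'
print(obj.read(0))     # the original version is still intact
```

Because no version is ever rewritten, background tasks such as replication or scrubbing can read any version without coordinating with writers, which is the benefit claimed in the next passage.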
D: The other benefit is that, because of the immutability, and the metadata immutability in particular, whatever background operations need to be done, like replication, self-healing, adding or removing a node from the cluster, or handling a failed disk, can all happen without the metadata needing to be modified, so this is a huge benefit. For instance, we don't have any processes, like background scrubbers, which are essentially modifying metadata; metadata is always immutable, and that gives us great end-to-end data integrity.
D: Well, exactly. Because everything is immutable, it becomes kind of easier, because you know exactly whether the data was corrupted or not, and you can go at the chunk level rather than, you know, coordinating with the data source. So this way, each disk in itself becomes a metadata server, so to speak.
D: This also brings great additional benefits, for instance if you think about dynamic resource utilization. Because we don't have a physical location built into the metadata structure, and it's fully immutable, we can place metadata and data chunks into variable locations, which can be selected during the actual transfer. The location can be decided, for instance, based on latency; we can select, for instance, a disk with less latency.
D: The raw disk is where the chunk lives; the actual index will be pointing to the version manifest, and the manifest is the root of the object in the store. So out of this group, group 94 in this particular case, we can select three disks, but it can be any disk in that group. Typically those groups are between [inaudible] and 24 disks, so there's a good variety of disks available for placement, and therefore we can select very efficiently, you know, three disks out of 24.
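The placement step described above, picking three targets out of a negotiating group of up to 24 disks while favoring low latency, can be sketched as follows. The disk names and latency numbers are invented for illustration; in the real system they would come from a live negotiation:

```python
import heapq
import random

def place_chunk(group_latencies, replicas=3):
    """Pick the `replicas` lowest-latency disks from a negotiating group.

    `group_latencies` maps disk id -> currently measured latency in ms.
    """
    return heapq.nsmallest(replicas, group_latencies,
                           key=group_latencies.get)

random.seed(7)  # deterministic fake latencies for the example
group = {f"disk-{i}": random.uniform(0.5, 20.0) for i in range(24)}

targets = place_chunk(group)
print(targets)  # the three currently fastest disks in the group
```

Because the manifest records only chunk identity, not a fixed location, this decision can be made fresh on every transfer.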
B: I'm assuming your metadata and data are commingled everywhere? Yeah.
D: Let me clarify. The placement decision comes, for instance, within a group, as in this particular picture. So if you want to identify which drive wants to accept this particular chunk, in this case multicast is greatly beneficial. Why? Because instead of sending a request to all 24 disks, you send only one, and then the responses you receive as a result of the multicast...
D: So in that case, we just simply fall back to UDP messaging, that's all. Okay, so I also wanted to talk a little bit about the benefits of containers and data locality. In particular, we have some design work, which is currently not in the product but which NexentaEdge is planning to work on, called, essentially, data locality and localized access points. And by the way, this picture on the right kind of gives you a sandwich view of what is going on inside our storage solution.
D
So
you
can
see
kind
of
in
one
picture
all
at
once,
but,
like
I
said,
the
point
can
be
very
interesting
specifically
for
containing
converge
deployments,
because
both
writes
in
recent
operations
can
be
configured
such
that
they
all
local
August.
But
at
the
same
time
it
is
still
has
3d.
Application
still
can
be
accessed
from
the
different
servers
a
little
bit
slower
than
usual,
but
it
can
be
access.
D: So, for instance, because we have that negotiation mechanism built in, we can say, okay, we can store one of the replicas in a local location and then have a redirection stub which points us to the different locations. Things like redirection cannot be done easily with Ceph, because it's not operating with immutable metadata and data; therefore it's very difficult to redirect like this. But in our design it's fairly simple, and we are working on this solution.
D
We
should,
in
my
opinion,
should
give
great
boost
performance
in
hyper-converged
antenna
conversion
scenarios,
and
so
there
is
also
been
some
benefits
to
the
region.
Coding
and
I
just
wanted
to
give
quickly
so
that
I
can
complete
this
presentation.
What
we
we've
done,
essentially,
is
an
analysis
of
how
it
is
done
traditionally
and
traditionally
there
is
an
Park,
typically
impact
on
write
and
read
performance.
D: So what happens here is, remember that group: you have three disks which hold these chunks originally. So if you have three replicas, then when requesting the chunk we have a chance to basically select the fastest out of the three; but in the case where we have only one chunk representing this...
D: ...there's a lower-cost option called hybrid, where you have HDDs and SSDs. In that case, we can combine them and build a number of groups which will offload, so to speak, the erasure coding. We also support raw disk access, meaning it runs either on a raw disk or just on top of a typical file system; it's optional, so we can select.
D
The
raw
disk
provides
obviously
more
help
of
the
file
system,
which
gives
a
different
benefit,
and
we
also
have
performance
option,
which
is
all
this
is
deep,
and
in
that
case
we
doing
memory,
mapping
of
the
entire
it's
geez
and
then
again
roadies
access
and
provide
high
performance,
low
overhead,
low
latency
and
to
the
device
all
configurable
configuration
typical
down
to
the
JSON
files
and
requires
a
star
the
container
and
I.
Guess,
that's
that's
it,
and
there
is.
There
is
another
one.
D
It's
probably
more
details
than
you
will
want
to
hear
right
now
and
I
just
quickly
just
go
through
this
and
just
point
to
this
table.
We
also
have
advanced
s3
mechanism,
which
is
more
designed
for
machine
learning
and
if
you
don't
tell
just
applications
where
you
need,
like
you
know,
very
fast
mode,
for
instance
of
data,
said
very
often
and
also
additional
versioning
capabilities.
So
we
need
some
analysis.
We
also
say
SS
s3
and
what
we're
providing
is.
D
There
are
great
benefits
to
over
the
SD
inertial
goal
by,
for
instance,
we
do
support
package
snapshots,
object
snapshots.
We
also
can
build
a
key
value
database
as
object
support
videos
arranged
right.
This
is
one
of
the
interesting
functionalities
so
like
like,
as
you
know,
in
case
of
st,
if
you
want
to
override
something
and
you
have
to
replace
the
object
or
do
some
hacker
with
multi-part
in
our
case,
we
provide
clean
range
right
mechanism.
We
can
actually
modify
and
I
still
do
be
building.
So
all
that
basically
can
excuse
my
flight
today.
D: ...it's kind of required for us. It's very important to the design to have a good back-end network, because we're doing UDP, and it's an IPv6 protocol; we're leveraging IPv6 on the back end. From the client side it can be anything, but it's very important, because if your back end is flaky or not fast enough...
D: So we're using [inaudible] primarily for our networking; it works well, as well as Open vSwitch. If you do, essentially, just a normal Open vSwitch configuration, no problem. As long as the networking provides IPv6 for the back-end network, there should not be any problem in deployment, yeah.
D: We particularly do not want to go into minimalistic, small deployments. We are actually more interested in datacenter, high-performance deployments, and indeed that will limit usage, but it purposely limits usage, because we don't want to deploy in all possible configurations. We want a good back-end network and a good experience at the end.
D: So, yes, yes, we definitely considered it, but at the same time, I also heard the Kubernetes folks are working on adding full IPv6 support, so to some degree, at some point, we will have both, right? If we work on this additional functionality, we can have support for IPv4 as well, but at the moment it's just IPv6. So, for instance, if you're going to be using this this year, it's most likely going to be just IPv6, and you'll usually be requiring a particular back-end network to run on.
D: Yes, so we have a few customers who are basically running it specifically for Docker, and...
D: The thing is, you're right; yeah, I mean, I totally agree, this is indeed limiting, but at the moment we just made a decision to limit it, essentially keeping it out of all the small deployments, because...
B: So if you wanted performance out of Ceph, you'd probably need to create a separate back-end network between OSDs, for example. Yep, yep. But as it currently stands, you know, you could deploy Ceph without it and it still works fine; if you want performance, you have to do the work to get it, so that's a nice property, I think.
B: Let's talk about next steps in terms of design. I think it would be great if you guys and we could work on a design doc around how to get this up and running. Jared's working on this back-end design for our CRD objects and all of that, so it would be great to see what the CRDs would look like for a NexentaEdge backend in a design doc, and to talk about all the issues there. Dimitri, who on your side wants to lead that work?
D
Well,
so
it's
gonna
be
Caitlyn.
We
we
definitely
want
to
participate
in
this
design.
Obviously,
as
you
see,
there
are
requirements
here
and
there
at
the
moment
of
the
product
II.
The
other
thing
is
I
wanted
to
suggest
phased
approach.
So
we
support
everything
like
the
support
file.
Algorithm
block
as
well
being,
is
to
get
going,
maybe
40.8
milestone.
If
you
get
there
would
be
object.
So
if
you
just
stop
this
or
look
and
basically
just
say
file
and
both
will
come
later,
I
think
the.
C: The key thing under that, which we have to figure out how to address in the long run, is that our file is built on top of object, which is different from the file that you have in your current model. I just think we're introducing a new type of file system, or whatever, and we have to explore our options there.
B
Sounds
good:
let's,
let's
take
this
all
to
a
design
dog,
maybe
maybe
the
next
step
would
be
to
start
creating
a
Google
Doc
that
has
some
of
this
stuff
in
it
and
then
start
defining
these
types,
maybe
starting
with
the
object
store
and
then
let's,
let's
take,
let's
kind
of
it
right
there
does
that
make
sense.
Jerry
does
that
yeah.
A: Yes, yeah, I think the two big things are, you know, defining what a developer would need to do specifically to add another backend to Rook, which is on my plate, and that'll be there; and then having a better understanding of the deployment requirements for NexentaEdge: what components need to be deployed, where do they need to be running, how do they need to be managed, all that sort of stuff. Having a better understanding of that will complete this picture for us, I think.
E: I guess, kind of the mon StatefulSet thing, where I got to the point where it would work if we would terminate the mons gracefully. So I'm currently looking at the code to see how simple it would be to implement that, so every component would be gracefully terminated, and not, as right now, instantly terminated.
E: So I thought that I could just add, after the running of the ceph-mon command, just a "hey, remove this one" and we're fine, and then maybe add a check to the operator just to get it running with that. But yeah, when I ran into it, it's not working like that, because there is no signal handling happening, at least in the mon. Yeah.
E
I
think
we
need
to
do
something
like
in
the
operator,
as
we
already
have
some
I
think
it
was
the
operator
where
we
already
have
the
latest
pull
request
that
went
through
by
I
think
it
was
Ilya
or
what
her
name
or
his
name
is
debts,
simply
added
a
stop
channel
to
everything.
So
it's
I
think
in
that
case,
more
of
a
question
how
simple
it
is
to
give
the
executors
to
stop
channel,
because
if
the
executors
and
the
over
record
a
code
would
have
this
stop
channel
and
I
can
react
on
it.
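Rook is written in Go, where the "stop channel" mentioned here is typically a channel that long-running loops select on so they can clean up before exiting. A rough Python analogue using `threading.Event` (purely illustrative, not Rook's code) looks like:

```python
import threading

def run_component(name: str, stop: threading.Event, log: list):
    """Long-running worker that checks a stop 'channel' each iteration,
    so it can shut down gracefully instead of being killed mid-work."""
    while not stop.is_set():
        stop.wait(0.01)   # do one unit of work, or sleep until signaled
    # Graceful-shutdown step, e.g. deregistering the mon before exit.
    log.append(f"{name}: cleaned up gracefully")

stop = threading.Event()
log = []
t = threading.Thread(target=run_component, args=("mon", stop, log))
t.start()
stop.set()               # the operator signals shutdown
t.join(timeout=1)
print(log)               # ['mon: cleaned up gracefully']
```

The point of the pattern is exactly what the speaker wants: the cleanup step (such as a `ceph mon remove`) runs before the process exits, rather than the pod being terminated instantly.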
E: When the monitor is moved to another node, at least then... So my idea is, either I have to watch the mon pods, as I'm kind of already dealing with updating the IP addresses, and then just fire a "ceph mon remove" command. All right, but I would prefer to do it in the monitor, and as I said, overall, adding graceful termination to the components would be a huge thing to do, especially in the executor, because I think we use the executor everywhere we run a component. This would be good. Okay.
A: The agenda doc will get updated with some more comments we talked about, and the recording will be up too, but I don't think you missed anything huge. Yeah, I don't think so. We spent a good amount of time today talking about integrating NexentaEdge and what that would take, and that'll all be on the recording as well. Nice.