From YouTube: 2019-06-04 Rook Community Meeting
A: Okay, the recording has started, and this is the June 4th, 2019 Rook community meeting. We will start the meeting with a quick check-up on our recent milestones. 1.0 came out about a month ago, and we'll see if we have any patch work to do for it, any pending patches, and then we'll take a look at the current milestone, which is 1.1. All right, so here is the project board for the 1.0 patch releases. I know a number of these are in investigation, Travis.
B: Definitely. I was just going through this board this morning and added a few that I think should be backported that weren't in the list yet. Basically, several of these pull requests are in review and getting close to done, and I think it would be good to get them in. Not everything in the review column needs to be backported, though. The first one, the doc resource limit, is a simple thing; the second one, let's see, admission webhooks, yeah.
B: There we go. And then the second one there in the list, assigned to Blaine, I'm not sure that one will make it either, since Blaine is traveling this week. But the rest of them I'd expect to get in over the next couple of days, hopefully. In my mind, the goal would be the end of this week to get a 1.0.2 release out with some of these fixes, and then, with the other ones in investigation, I'm sure we'll need a 1.0.3 at some point after that.
A: All right, so for 1.1, this is the project board for that. Do you feel that the applicable items from the roadmap have all been added to the 1.1 project, or do we have some more work to go there? I have a feeling we have some more work to do to capture everything in this milestone. Yeah.
B: Work is starting on 1.1 for sure, yeah, it's in progress; it's just that the board hasn't caught up and we haven't done due diligence there, and we need to update the roadmap for 1.1. I added that to the agenda as a general thing; I'd like to plan on updating that in the next couple of days. Okay.
A: All right, so let's move ahead, since the main focus right now is on patches for 1.0. Some development work has started on 1.1, but we need to spend some time focusing on getting the items and issues updated for the roadmap; then we can discuss that a little bit later. Seems good to me, so we can go ahead and move on to the community topics section. Sounds good to me, right. So the first item we had is that we skipped the last community meeting, because that was right in the middle of KubeCon Barcelona.
A: So thanks to everybody who stopped by the booth and chatted with us, and for attending the talks as well. All the Rook talks were very popular: the intro, the deep dive, the global-scale data one, all those talks were really popular, so it was super exciting to see all the excitement around Rook in Barcelona. Travis, did you add this "need more swag" item? Yes, I did.
A: The stickers and t-shirts that we had went incredibly quickly. People were very excited about the new Rook artwork and logos that were done by Chris, and the t-shirts commemorating version 1.0 were super exciting as well. I think we probably should have reserved or saved some more swag for the main KubeCon event versus Cephalocon; we set up shop at the Cephalocon booth first, and things were going pretty hot there.
A: Awesome, yeah, that's great that the booth looked good. The discussions there, the excitement and buzz around it, were awesome. I think everything was really, really good at KubeCon. Does anybody have any big takeaways or observations that they want to share with the group here, besides the general observation that things were awesome?
B: I mean, we had some Ceph pins and things, but yeah, we just need other storage providers; the more storage providers we have represented, the better, in the future. So for those driving other storage providers, you're fully welcome and invited to add to that collection. That's right, absolutely.
A: Yeah, I haven't seen attendees turn down swag, and I still laugh to this day about our early Rook contributor, Steve Leone, alias kacang, who would literally bring a second, empty suitcase to each KubeCon so that he could bring it back full of swag. That's the most dedicated swag man I've ever seen in my entire life. Yeah.
B: All right, the next one is just kind of a general, simple question. In Slack today, most people use the general channel for all sorts of questions. I think we're getting to the point where it'd be best to split that into new channels for each storage provider, just so people can direct their questions to a place where the answers might be filtered more easily for the right people to answer. So unless there's any other suggestion, I'll go ahead and create one for each of the storage providers.
A: Yeah, I think that makes a lot of sense, because the general channel, at least in terms of its intent, is for general Rook discussion, and it makes a lot of sense to have storage-provider-specific stuff focused in its own channels, where people can find it easily and the experts in those particular areas will be able to help most easily. That makes a lot of sense to me.
A: We have a whole bunch of question-and-answer pairs that the bot knows about, to be able to automatically answer people's questions. Those need to be approved, or sort of walked through with a human, to make sure that they make sense and that they are appropriate, quality answers. I have looked at that a little bit; I started taking a pass at it at the beginning, and there are a few hundred of them, so it was kind of slow going.
B: I mean, I've approved a lot of them. Okay. At the same time, it felt like it would probably be hard, in the future, for the bot to answer somebody else's question based on those. Maybe so, but everybody comes with such different questions in general that it's a challenge, I'd say, for the bot to be super helpful.
A: It looks like right now it's the minority of cases where the answer is marked as helpful; most of them are saying that the answer is not helpful, which does continue to train it. That's interesting, though, and I think it's definitely a testament to the backlog of unapproved answers we have. We only have 28 approved answers, and we have a few hundred potential answers that the bot could know about. So I think the quality of the bot could definitely increase if we took the time to approve the answers and get it taught up and understanding the needs of the real community. That might be a good action item to follow up on.
D: There are two of us here from the Apache Ozone team, me and Martin. So, very briefly: Ozone is an evolution of HDFS. It basically evolved from HDFS, and it is optimized for big data workloads. We believe that a lot of big data workloads are moving over to Kubernetes, and it is very useful for them to have a file system that is capable of handling big data workloads, so Ozone fits in perfectly there.
D: Oh, thank you. So yeah, I was just responding to a comment. Essentially, just to give you a very brief overview: Ozone looks and feels like, I mean, it is an object store, but it integrates perfectly into the big data ecosystem, so YARN, Spark, Hive, all of that works. Both of us are Apache Hadoop committers and PMC members and all that, so we've been working on this for the last three years, and a lot of our users and customers are now moving onto Kubernetes.
D: We have been playing around with normal installs and, you know, lifecycle management, and we would like to support this inside Rook so that the experience is very consistent with the other storage systems. So that is briefly what Ozone provides at this point in time: a Hadoop-compatible file system API, which is what big data applications use. It's not really a file system in the classic sense, but it is more like a file system interface, a driver for applications, and it also supports the S3 protocol out of the box.
D: You just commented that you'd be very interested in a configuration example for k8s with a Spark operator. We will certainly do that. In our releases, Ozone currently ships with Kubernetes configuration examples and things like that; we do not have Spark examples. Martin, correct me if I am wrong, but we...
A: So to me this sounds really cool: having the ability to easily deploy, manage, scale, and so on, and do a lot of automation around those operational tasks, for Ozone as a storage provider. I think that's great. Right now, when you run it in Kubernetes, what does the system look like?
D: So basically, there is the storage manager, and on top of the storage manager we have a namespace manager, which we call the Ozone Manager. And then we have a bunch of data nodes where we actually store data, and they are StatefulSets with persistent volumes and all that. So basically that is the first layer of the core system. Then we have multiple services; the S3 gateway, for example, comes up as a different service. So there are four or five of these microservices which come up, many instances of them, and then for CSI...
D: ...we have another service, and so on and so forth. So primarily there are two highly available services: there are three instances of the Ozone Manager and three instances of the SCM, and they are replicated via Raft. And then the data nodes run a replication protocol whenever we write data, and that is pretty much it.
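To make the shape of that integration concrete, here is a minimal sketch of what a Rook-style CRD for an Ozone cluster might look like, expressed as Go types and printed as YAML. Every type, field, and group/version name below is a hypothetical illustration based on the components described above (Ozone Manager, SCM, data nodes, S3 gateway), not an actual Rook or Ozone API.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// OzoneClusterSpec is a hypothetical spec for an Ozone cluster managed by a
// Rook-style operator. Field names mirror the components described in the
// meeting and are illustrative only.
type OzoneClusterSpec struct {
	// Replicas of the Ozone Manager (the namespace manager), replicated via Raft.
	OzoneManagerReplicas int `json:"ozoneManagerReplicas"`
	// Replicas of the Storage Container Manager, also replicated via Raft.
	SCMReplicas int `json:"scmReplicas"`
	// Number of data node pods (StatefulSet members backed by persistent volumes).
	DataNodeCount int `json:"dataNodeCount"`
	// Whether to run the stateless S3 gateway service.
	S3Gateway bool `json:"s3Gateway"`
	// Storage class used for the data node persistent volume claims.
	StorageClassName string `json:"storageClassName"`
}

// OzoneCluster is the illustrative custom resource wrapping the spec.
type OzoneCluster struct {
	APIVersion string            `json:"apiVersion"`
	Kind       string            `json:"kind"`
	Metadata   map[string]string `json:"metadata"`
	Spec       OzoneClusterSpec  `json:"spec"`
}

func main() {
	cluster := OzoneCluster{
		APIVersion: "ozone.rook.io/v1alpha1", // hypothetical group/version
		Kind:       "OzoneCluster",
		Metadata:   map[string]string{"name": "my-ozone", "namespace": "rook-ozone"},
		Spec: OzoneClusterSpec{
			OzoneManagerReplicas: 3,
			SCMReplicas:          3,
			DataNodeCount:        5,
			S3Gateway:            true,
			StorageClassName:     "local-storage",
		},
	}
	out, err := yaml.Marshal(cluster)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Under these assumptions, the operator would reconcile such an object into StatefulSets for the data nodes and separate services for the Ozone Manager, SCM, and S3 gateway, mirroring the component layout just described.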
A: Since the mainline scenario here is to use raw disks: with this integration effort, I would love to see a common Rook framework. Rook has a set of types and specs and some logic that are common across all the storage providers, and I would love that. Ceph also uses raw disks, and I think it has some logic...
A: ...that is still in the Ceph code itself, around managing some of those raw devices, selecting them, discovering them, and all that. I would love to see the code that is common there, that can help other storage providers, moved into that common area, so that it means less code that you all would have to write, and you get the benefit of that logic for the greater community. That would be a great thing.
A: That's a very reasonable and safe approach, and I agree with it. Oh yeah, this is exciting. I think the work that you guys have done here so far to write this up and get this initiative spearheaded is really great, so I appreciate the work that you all have done.
D: I have looked at all the design documents, and I must say the design documents are outstanding; whoever wrote them put in a lot of work and a lot of thought, so thank you for that. Cassandra's is also very good, so I can actually produce something like that, at that level of detail, or I can do something like...
B: I think, ultimately, if we can understand how Ozone will integrate with Rook and understand what the operator needs to do, that's the important part. There could be varying levels of diving in that may be useful; some of it may depend on the operator. I'd say there's no single answer.
A: Yeah, Minio and CockroachDB are both very simple, pre-alpha type implementations, and they're a good starting point to figure out where all the touch points and integration points are for how to integrate with Rook; they serve as good examples for that.
D: May I propose that I also write a getting-started plug-in document? We had a look at the source, so maybe somewhere in the beginner docs there could be a guide on how to write a plug-in based on these examples and where to go look in the code. We just did that, so we can even write that up; it is something that is interesting. We can, yeah.
E: The POC of the operator, I think I pushed it and I can share it; actually, it's on GitHub. So what I did: I created a new operator based mainly on the Minio code, but because we have a lot of Kubernetes resources, I used an approach from ceph-csi, where all the base YAML files are loaded and just customized.
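As a rough illustration of that "load the base YAML and customize it" approach (this is not the actual POC code; the manifest and the fields being tweaked are made up for the example), an operator can keep a template manifest, unmarshal it into a typed object, adjust the fields that depend on the custom resource, and then apply the result:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/yaml"
)

// baseDeployment stands in for a base YAML manifest shipped with the operator;
// a real operator would load it from disk or an embedded file.
const baseDeployment = `
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-server
  template:
    metadata:
      labels:
        app: example-server
    spec:
      containers:
      - name: server
        image: example/server:latest
`

func main() {
	// Load the base manifest into a typed Kubernetes object.
	var dep appsv1.Deployment
	if err := yaml.Unmarshal([]byte(baseDeployment), &dep); err != nil {
		panic(err)
	}

	// Customize the parts that depend on the custom resource being reconciled.
	replicas := int32(3)
	dep.Namespace = "rook-example"
	dep.Spec.Replicas = &replicas
	dep.Spec.Template.Spec.Containers[0].Image = "example/server:v0.1.0"

	// A real operator would now create or update this via the Kubernetes API;
	// here we just print the customized manifest.
	out, err := yaml.Marshal(&dep)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```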
A: It would be great to see what the integration looks like: what sort of artifacts get created by the operator, maybe what the CRD would look like, what the user would be able to configure and set, and things like that, just to get an idea of the experience. That high-level stuff, I think, would be a great starting point, and we can add comments there and iterate over it. Okay.
G: Yes, hi, hey there, hey Travis, hi guys. I think, at least from the point of view of the issue, the issue was created some time ago and probably doesn't have enough information, but for the PR I opened today there's an actual doc behind it, if we prefer that. I don't know how you want to use it, but can you bring that up? Yeah, this one.
G: Perfect, perfect. So can I take five minutes and tell you about NooBaa? Please do. Okay, okay, okay! So thanks, guys, I'm excited to do this and get started with Rook here. I'll give you just a one-minute history of what got us here, and then I'll tell you what I did and what we plan on doing.
G: The problems we were addressing are, you know, multi-cloud: getting rid of lock-in, like vendor lock-in for the cloud, or hybrid cases, and so on. We never took the approach of being the only storage in the world; we were always aggregating storage. You'll see that, I guess, in the design: we have backing stores and we have policies, and we try to make these policies, and the experience of using them, the major thing about the product.
G: NooBaa itself, throughout its development, has always been deployed as a VM, and only in the last six months or so did we move to running on Kubernetes and OpenShift as a platform. So I would say that's new for us; we did see it coming along the way, but actually deploying it this way is new for us, at least. I think we don't have a lot of components; I can actually point you at them.
G: If you can go through that, there's an architecture slide here; can you click that? I hope it will open up. Okay, yeah, maybe just the wrong one, yeah, yeah? Okay, so that's good. I just wanted to give you a high-level view of what it is. So we connect to cloud resources and we connect to file systems, which we just use as backing storage, and we have a brain on the left.
G: There's a brain, as it looks, and the point about the architecture is that we provide a rich, capable policy engine and metadata server that can place the data very flexibly. So you can decide, for every piece of data, where it should go, and put it there. We do dedup, compression, and encryption on every piece of data, and the I/O path...
G: ...the data path here on the right does not go through the brain. It does go through the brain for allocations and instructions, but the actual data goes directly to the resources themselves. So that's the basic thing; not too many components. The main component is the brain. The other is the top right one, which is the endpoint providing S3 endpoints for object store access, and the cloud storage is provided externally as an API.
G: Okay, so that's just the high-level architecture, and the plan with NooBaa is to provision it for environments which have a relation to hybrid and multi-cloud environments. At the first stage, at least in the design doc, I didn't go into that specific deployment, just because I actually wanted to show you at least one thing:
G: what the first step would be, and, you know, just get feedback and see how that looks, and how that approach of deploying NooBaa, which I think is less of an effort, like you said before, would work. I guess it doesn't need to manage local volumes: whenever we take a file system, we take it with a PVC, and we can also take other object stores, or, you know, any key-value stores.
G: Basically, when we take storage from other APIs, we just use an API, like a RESTful, HTTP-based one. So it works in that sense. And there is another caveat there, or maybe a capability that we're trying to push, and you guys probably know it, which is the idea of providing an object bucket claim analogous to the persistent volume claim. We, at least, believe that makes sense, and we're not saying it's the only way; we also provide provisioning using S3 as well.
G: To try to answer that: there are a few things here in the design doc. There are the backing stores, and there's a bucket provisioner, exactly. The bucket provisioning describes the general approach of how NooBaa itself answers object bucket claims; that's like the front end, to provide bucket claims to applications.
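For readers unfamiliar with the idea, here is a minimal sketch of what such an object bucket claim might look like, written as Go types and printed as YAML. The group/version and field names are assumptions in the spirit of the claim-analogous-to-PVC idea discussed here, not a verbatim copy of the NooBaa or Rook API.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// ObjectBucketClaimSpec sketches the claim side: an application asks for a
// bucket the same way it would ask for a PersistentVolumeClaim.
// Field names are illustrative, not the exact upstream API.
type ObjectBucketClaimSpec struct {
	// StorageClassName selects which provisioner/backing policy serves the claim.
	StorageClassName string `json:"storageClassName"`
	// GenerateBucketName asks the provisioner to create a bucket with this prefix.
	GenerateBucketName string `json:"generateBucketName,omitempty"`
}

// ObjectBucketClaim is the illustrative custom resource wrapping the spec.
type ObjectBucketClaim struct {
	APIVersion string                `json:"apiVersion"`
	Kind       string                `json:"kind"`
	Metadata   map[string]string     `json:"metadata"`
	Spec       ObjectBucketClaimSpec `json:"spec"`
}

func main() {
	claim := ObjectBucketClaim{
		APIVersion: "objectbucket.io/v1alpha1", // assumed group/version
		Kind:       "ObjectBucketClaim",
		Metadata:   map[string]string{"name": "my-app-bucket", "namespace": "my-app"},
		Spec: ObjectBucketClaimSpec{
			StorageClassName:   "noobaa-default-class",
			GenerateBucketName: "my-app-bucket",
		},
	}
	out, err := yaml.Marshal(claim)
	if err != nil {
		panic(err)
	}
	// The provisioner would respond by creating the bucket and handing the
	// application an endpoint and credentials (typically via ConfigMap/Secret).
	fmt.Println(string(out))
}
```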
G: So that's the front-end part, for object bucket claims. On the backing storage piece, I can take you back, maybe, to the, yeah, so there are the backing stores here, yeah. Basically, there are options here; maybe I should have pasted some more examples. There's also the CRD example; if you can open that from the front page of the design as well, the first one, number one, the system CRD. Thanks; I can drive as well if that's annoying. Thanks, no.
G: So if you drill a little bit into the full spec example, you see that, right. You can see the backing store piece here, like the object in the spec. There are multiple ways of defining how we connect, right? This is just the native one: if I have an AWS S3 bucket and I want to connect to it, I can just provide a bucket name, a region, and a credentials secret.
G: That's the native way of saying where I should connect to, and then NooBaa connects to it without a problem. For the other options, if you drill a little bit down below, or maybe just to cover the rest: there's S3-compatible, there's Google, there's Azure Blob. All of these are supported by the product once the credentials for them are there.
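As a concrete sketch of the kind of backing-store spec being described (a bucket name, region, and credentials secret for the native AWS case, with other provider types alongside), here is an illustrative Go rendering; the field and type names in the real NooBaa CRD may well differ, so treat these as assumptions.

```go
package main

import "fmt"

// BackingStoreSpec sketches how a backing store might be declared: a type
// selector plus per-provider connection details. Names are illustrative.
type BackingStoreSpec struct {
	Type  string     `json:"type"` // e.g. "aws-s3", "s3-compatible", "google-cloud-storage", "azure-blob"
	AWSS3 *AWSS3Spec `json:"awsS3,omitempty"`
}

// AWSS3Spec is the "native" case from the discussion: a bucket, a region, and
// a secret holding the credentials.
type AWSS3Spec struct {
	TargetBucket string `json:"targetBucket"`
	Region       string `json:"region"`
	SecretName   string `json:"secretName"`
}

func main() {
	store := BackingStoreSpec{
		Type: "aws-s3",
		AWSS3: &AWSS3Spec{
			TargetBucket: "my-company-backing-bucket",
			Region:       "us-east-1",
			SecretName:   "aws-s3-credentials", // Secret containing access/secret keys
		},
	}
	fmt.Printf("backing store: type=%s bucket=%s region=%s secret=%s\n",
		store.Type, store.AWSS3.TargetBucket, store.AWSS3.Region, store.AWSS3.SecretName)
}
```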
G: Then, we're not proxying the API calls one-to-one; that's the point that we were making. So, okay, let me slow down: we have multiple modes of operation, and the idea is to provide a flexible solution. The native way, or maybe the normal way that we operate, is that once a NooBaa bucket gets an upload or a read, whatever the operation is on objects, we don't just redirect the request to another object store. We ingest it in our endpoint and we run the whole data pipeline on it.
G: We cut it into chunks; we compress, encrypt, and get our checksums and everything else into the metadata of each chunk, and then we store the encrypted chunks on the backing stores. The idea was to provide a means of separating encryption keys from data, and to provide efficiency with compression and deduplication.
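As a toy illustration of that ingest pipeline (cut into chunks, compress, encrypt, checksum), and emphatically not NooBaa's actual implementation, which has its own chunking, dedup, and key management, a sketch in Go might look like this:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

const chunkSize = 8 // tiny chunk size, just for the demo

// processChunk compresses, encrypts, and checksums one chunk, returning the
// ciphertext and the checksum that would be kept in the chunk's metadata.
func processChunk(chunk, key []byte) (ciphertext []byte, sum [32]byte, err error) {
	// Compress the chunk.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err = zw.Write(chunk); err != nil {
		return nil, sum, err
	}
	if err = zw.Close(); err != nil {
		return nil, sum, err
	}

	// Encrypt the compressed chunk with AES-GCM.
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, sum, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, sum, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, sum, err
	}
	ciphertext = gcm.Seal(nonce, nonce, buf.Bytes(), nil)

	// Checksum of the original chunk, stored as chunk metadata.
	sum = sha256.Sum256(chunk)
	return ciphertext, sum, nil
}

func main() {
	data := []byte("example object payload")
	key := make([]byte, 32) // demo key; a real system keeps keys separate from the data
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}

	// Cut the object into chunks and run each through the pipeline.
	for i := 0; i < len(data); i += chunkSize {
		end := i + chunkSize
		if end > len(data) {
			end = len(data)
		}
		ct, sum, err := processChunk(data[i:end], key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("chunk %d: %d encrypted bytes, sha256=%x...\n", i/chunkSize, len(ct), sum[:4])
	}
}
```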
F: Yes, so for the actual storage, whatever format you're writing after the ingestion happens and all of that, you can use other backing stores and you can configure them. But you also talked about having a CRD that's a claim, and being able to bind that to different backing stores. Those seem like different things to me; could you maybe just address that? Yes.
G: So let me say this: the core capability of the product is to connect to all the provided storage, right. That's what we were trying to do: to create policies between them, whether it's tiering, whether it's mirroring, and to make the data location whatever you want, just creating a multi-cloud, hybrid-cloud environment which works with these policies. Okay, so that's the basic thing. You can configure NooBaa without bucket claims; you can configure it just by saying here's a bucket, there are the credentials.
G: Here are other credentials, just work it out; that's how NooBaa has always been deployed and working, and we have a UI that can provision, so it can help to set everything up. I even pasted a few links to YouTube videos, I think, on the main design doc, that you can watch if you want. So it's fully capable of doing this without any bucket provisioning.
G: However, the reason why we think provisioning adds to the total solution is that, in essence, inside Kubernetes there is no representation of these operations of getting a bucket and provisioning it. So it's like: you have an application, it wants to use a bucket, what do you do? You go to the object store, you provision it, you create a bucket in its UI, and then you take the credentials, go to the application configuration, and put them there, right. So it's always...
G: So we propose both; we believe, at least from our point of view, that there is no point in saying that only one makes sense and the other doesn't. If you have an application, and the application should work in that environment of a multi-cluster, hybrid cloud, then it makes sense to connect to the NooBaa bucket via the bucket claim; and for easier provisioning of NooBaa itself, there are the backing stores.
F: In the case that you provision NooBaa buckets, you're actually intermediating the calls and doing the data plane, the ingestion, potentially sharding, everything you need to do across your backing stores; and in the case that you provision an S3 bucket directly, you're talking directly to S3 and there's nothing in between.
G: So there are two pieces here: one piece is what we're suggesting to do in the short term, and the other is the longer term. I'll tell you about the short term. The thing is that once we moved to Kubernetes, we sort of put aside our HA clustering solution; we had to put it aside for just a few releases because it was holding us back. So for the time being, we are relying on PVs, which can provide HA, okay.
D: How do you scale the NooBaa core? Am I making a request to a NooBaa core, and is there a NooBaa core on every data node? Suppose I have a hundred-node cluster, and in this cluster I have one hundred or two thousand containers trying to make requests to the NooBaa core; one machine will soon run out of bandwidth doing the chunking. So how do you scale that? Okay.
G: The endpoint is responsible for doing all the CPU-heavy, network-heavy operations, right, so the chunking, the encryption, all the stream processing that happens to the data runs in the endpoint, and that endpoint is a stateless component, so it can be scaled horizontally with any of the horizontal scaling tools.
G: The only thing that does limit the ability to scale up, though it doesn't limit it forever, is the calls that these endpoints make to the brain, and these calls are essentially very limited, both in size and in the effort that is needed to process them. They are calls like allocate (give me a location, where should I put this data and that data) or persist this metadata for me, and so on.
G: So it's like a database, and it will scale like a database, right, and we are using MongoDB internally for this metadata. MongoDB provides scalability for these sorts of workloads, and therefore we have the pieces for a scalable architecture there. Does that work for you? Does it make sense?
A: This sounds good, and it's also really exciting, both NooBaa and also Ozone today. The NooBaa design pull request is already open now, so let's follow up there, because we only have about five minutes left in the call and we still have some other agenda items to get through. But definitely, thank you very much for showing this work off today and getting that pull request open; we can continue there.
B: On the CI, yeah, just pointing out that there's work in progress to move the CI to an account where more contributors can have access. Thanks to Chris from Upbound and others working on that; I know it's in progress. There are some DNS issues to resolve to finalize it, but as part of that, hopefully, after it's migrated, we can get more build agents. That's kind of the biggest pain point right now around the CI, besides some stability issues, but anyway, it's in progress and we're paying attention to it. Travis?
B: Appreciate it, thanks. So yeah, hopefully they get back to us soon, and it sounds like it's a pretty quick thing to finish off that migration, and yeah, that's good for now. The next point I wanted to bring up is a follow-up to KubeCon and some discussions we had there. I am working on sort of a Rook charter doc that says: hey, here's where we want the work to go, here's how we support storage providers coming into the Rook community.
B: There are lots of thoughts around what the framework looks like, you know, what does that look like? The goal being: whatever is common between the storage providers, we definitely want to be clear about how storage providers can benefit from that commonality and have common libraries to help with it. We've...
B: ...talked about some of that already, earlier in this meeting. At the same time, the goal is to give independence to each storage provider in multiple ways, so I think storage providers in general would be interested in defining their own release schedule, running independent integration tests and CI, and having control over pushing to the repo, and some of that is in Jerry's governance update today.
A: That sounds good, and building on that, over the weekend I opened up a pull request that updates the governance with a new change-approval process that enables two new roles for the project, reviewer- and approver-type roles. Currently only the maintainer team can approve and merge pull requests, but we will be expanding that to more contributors, excuse me, in the community, so that each storage provider can have autonomy...
A: ...you know, the independence to be able to work on their own code areas, unblock things, and improve their velocity. So this was a long time coming, and it is published now, and Travis has provided some feedback. Alexander, if you have any feedback as well, please add it, Bassam as well, and we will continue to drive that.
B: You know, what to do around channels and this tag specifically, because it comes with upgrade implications for people who are using it: you won't know exactly, the operator won't know exactly, what image it's using, potentially, if someone gives it that tag. I don't know that we have time to discuss it here, really, but if we can follow up on that, that'd be great, if people have comments on it.
A: Okay, so we are just about out of time here. Thank you, everyone, for attending and for discussing the new potential storage providers in Rook; I think that's very exciting work. I would love to see more additions to the Rook project and more contributors, and to keep this community growing. So this is all great stuff, and thank you very much, everybody, today. Thanks.