From YouTube: CNCF SIG Storage 2020-02-26
B: Yes, I can see. All righty — all right, let's go ahead and get started. We were here two weeks ago at the last Storage SIG call, and we started the discussion around the progress and effort that the Rook project has around graduation in the CNCF. Today we're going to be doing a little bit more formal of a presentation and discussion. So let's hop into that and talk about what the expectations are for today. As I mentioned, this is a formal presentation.
B: We have all of our ducks in a row here for the criteria. We've done the legwork, we've gathered all the data, and we are ready to have a formal presentation and discussion, and we want to kick that off today here with the SIG. We will have time at the end for a Q&A session. We have two of the Rook maintainers here today — actually, I think Blaine is on the line as well, so I think there are a few Rook people here today to answer any questions the SIG may have in terms of diligence.
B: So we started with just support for Ceph, a distributed storage solution, but we have since created a generalized framework for adding and supporting many different types of storage solutions and storage providers, which we'll get into more here. So you can think of Rook as a set of operators, but also a framework and a platform for building and integrating distributed storage systems into Kubernetes.
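The operator-plus-framework model described above means a storage cluster is declared as a Kubernetes custom resource that the Rook operator reconciles. A minimal sketch of a CephCluster manifest, with field names following the Rook v1 CRDs of this era (treat exact fields as illustrative, not authoritative):

```yaml
# Illustrative CephCluster custom resource; the Rook operator watches this
# CRD and deploys and manages the Ceph daemons it describes.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.7    # upstream Ceph image; Rook deploys it unmodified
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                    # three monitors so quorum survives one failure
  storage:
    useAllNodes: true
    useAllDevices: false        # safer default: opt devices in explicitly
```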
B: The big thing to recognize here is that almost every single community stat has grown at least 2 to 3x during the time we've been in incubation, so everything is up and to the right. The project continues to grow and attract more adopters, more contributors, more usage, etc.
B: Most of them are 2 to 3x, but the biggest one that really calls out to me is the container download metric, which has gone up 10x since we were accepted into the incubation stage. So those are the numbers — multiples of 2 to 3x, and 10x — and then I will pass it off to Travis to go into a little more depth on some of the specific accomplishments the project has had since incubation.
D: Yep. We do have several storage providers. EdgeFS and Ceph are our two graduated storage providers — we've declared them stable in the CRDs. There are others in alpha state: Cassandra, NFS, YugabyteDB — and CockroachDB is missing from this list, actually. So there are six storage providers, and our goal is to continue progressing them, add more storage providers, and provide a place where people want to come for storage in Kubernetes.
D: Our security audit was completed by Trail of Bits back in December, and we've got the report published for that now — we'll see it on the next slide. I think one issue was marked as critical; that was fixed quickly by the team, and we're continuing to follow up on the smaller items there. There are lots of other features and improvements that we don't have time to really dive into today.
D: Next slide, yep. So here's the link to all of these — hopefully all the things we're looking for in graduation, checking the boxes here. For our formal proposal, Jared opened that PR last night, so there's the proposal doc; comments are welcome, and I'm sure we'll get plenty of those. Otherwise, we updated our governance, so Rook has a steering committee now comprised of three members: basically, the graduated storage providers in Rook — Ceph and EdgeFS — have one member each, plus Jared.
D: We're passing the CII badge criteria now. And then — I think the most exciting part of this, honestly, was the adopters that we have: collecting that data and just hearing the excitement around, hey, Rook helps solve all these problems for storage in production environments. Jared will dive into some of that next.
B: All the details can be found in that formal proposal, the PR that we have opened in the TOC repo — PR 366. All those links are on that slide. So I'm just going to briefly talk about some of our production adopters and some of the key value they shared with us for what they're finding from Rook. This is the slide you get when you put an engineer in charge of collecting and distributing graphics.
B: So this is just a bunch of logos from some of our production adopters, but it's busy, so we'll move on — I just want to dive into a couple of them here that I think have interesting stories to tell. This is by far my favorite part of the process — I really enjoyed it for the incubation stage and now for graduation as well — connecting with our users and hearing their stories and what they're finding to be very valuable, interesting, and helpful from the Rook project.
A: Part of the comment is actually — so, I've come across a lot of confusion. People think that Rook is actually a storage technology, as opposed to something that manages storage technologies, and so one should be a little bit cognizant of making statements like "Rook clusters", etc. These are probably not Rook clusters; they're Ceph clusters managed by Rook — or am I misunderstanding something? Yeah.
B: It deals with and has end users that are serviced by their services for the entire population of Norway. For instance, one of the services they're using Rook and Ceph for is digital document distribution for everyone in the country of Norway — roughly, for the people who have not opted out of that service (they want to receive snail mail instead, I suppose) — so we're talking three or four million users there.
B: Replicated is another production adopter here, and they're interesting because they do a SaaS-on-prem type of thing, where SaaS providers can bundle all of their services — everything they'd need to have their services hosted on-premises — into a single Kubernetes distribution with everything it needs. Replicated decided to make Rook the default add-on for storage in those clusters: those distributions that software vendors want to use to ship their software and have it run on-premises, in air-gapped installations and environments, and things like that.
B: So I thought that was an interesting story, because it's not about how much storage or how big a cluster Replicated is managing for their services, but the fact that they bundle it in as the default storage option for their own Kubernetes distribution, for the many customers they sell that product to. I thought that was very interesting.
B: I'll keep going pretty fast here, but Discogs is another one. They're building one of the most comprehensive and largest online music databases and marketplaces in the world, so they're servicing millions of users across the globe as well, depending on Rook for storage orchestration — and I believe they're using Ceph as well; it's a stable Ceph distribution. One of the things we hear from people a lot, too, is how valuable the capabilities provided by the orchestration services are.
B: I think Fiddly Connect is another one where there's this common theme that we hear: besides being faster, easier, cheaper, another common thing we see is the dedication and commitment that the Rook community as a whole has provided to making sure that we are backwards compatible, or that an upgrade and migration process is always in place.
B: Even while we were still in alpha. So I was really happy with the community's dedication to making sure that critical systems keep running for adopters of Rook. We've got a couple more here — I'll just try to call out a couple of different highlights. The Centre of Excellence in Next Generation Networks is up in Canada, and they've been involved since the alpha days as well, so they've been pretty pleased with the maturity of the project.
B: They're pleased with the orchestration services provided by Rook so far, and they're users of some of the other storage options in Rook, such as NFS, Cassandra, CockroachDB, etc. They're one of the ones pleased to have those as options to augment the services provided by Ceph as a storage option in Rook.
B: And Avisi, I think, was interesting because they knew what they really wanted to share: they say the story they love telling people about Rook is that they have gone through multiple disaster scenarios — not on purpose, but they seem to have had some bad luck with hardware, data centers, outages, and network issues — all sorts of things that Rook and its orchestration services for storage backends were able to handle.
B: It kept things healthy and recovered, with no data lost, etc. These guys seem to be unlucky with some of the issues they've run into, hardware-wise and natural-disaster-wise, but they are very happy with the reliability and stability they've gotten from services offered by the Rook project. And then the last one here, Geodata, I thought was interesting because of when they had evaluated Rook.
B: They evaluated Rook a while ago — I don't know exactly when, but it was very early on — and then tried some other storage options in the cloud native ecosystem, and they just recently did another revisit of Rook. They are very pleased with the maturity and progress the Rook project has made since they first tried it very early on and didn't quite get exactly what they were looking for.
B: Geodata — I don't know exactly which provider they're using, actually; that one escapes me, and it's in our spreadsheets. But of the two declared-stable storage providers, Ceph and EdgeFS, they're using one or the other, and I think it's probably Ceph, but I don't know — I would have to look that up. Okay.
B: So a lot of times you see direct access or direct consumption of the storage primitives — object storage is another one — but then you do see, and I don't have a breakdown in terms of the numbers, people who are deploying databases using persistent volumes that are surfaced by the storage providers in Rook, so they can have higher-level storage systems writing to these lower-level primitives provided by the Rook storage services.
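The pattern described here — a database writing to persistent volumes surfaced by Rook — looks roughly like the following sketch. The provisioner name matches the Ceph CSI driver Rook deploys; the CSI secret parameters a real deployment also needs are omitted for brevity.

```yaml
# Illustrative StorageClass backed by Rook/Ceph block storage, plus a PVC a
# database pod could mount. Real deployments also set csi.storage.k8s.io/*
# secret parameters on the StorageClass.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 20Gi
```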
B: We want to have a valuable platform, so that if developers or creators of storage systems want an easier story, an easier on-ramp to integrate their storage systems into Kubernetes or cloud native environments, that's what Rook provides. In terms of what users want: if you need storage — whether you want a database, or file, block, and object storage, or whatever you may want — we are happy for that to come from the orchestration services and the storage providers that are integrated.
B: Rook has those integrated, but using other operators is a perfectly reasonable story as well. There are some pretty solid ones, like the MySQL operators — there are a couple of those — or the Postgres operator, or whatever it may be. If you want to run platform services or data services in-cluster, it's totally reasonable to choose the best tool for the job. But then there's what the Rook charter is all about.
B: The Rook charter is about providing a home and a framework — reasonable logic and processes — and a way for storage systems to not only integrate into Kubernetes but also evolve and mature, following the template and game plan we accomplished with both Ceph and EdgeFS, to have a reliable and stable offering within Kubernetes and in-cluster.
D: Yeah, something I like, too, is that at the end of the day, Rook is the management plane, as discussed earlier, which basically means we have an operator that manages these storage providers. We have an operator that manages Ceph and an operator that manages EdgeFS — different operators, each of which at runtime manages its individual storage layer.
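A sketch of how each provider gets its own operator and CRD group: the alpha Cassandra provider, for example, is declared separately from Ceph. The API group and fields below follow the Rook Cassandra examples of this period; treat them as illustrative and verify against the docs.

```yaml
# Illustrative: Cassandra has its own API group (cassandra.rook.io),
# reconciled by a separate Rook operator from the Ceph one.
apiVersion: cassandra.rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook-cassandra
  namespace: rook-cassandra
spec:
  version: 3.11.6
  datacenter:
    name: dc1
    racks:
      - name: rack1
        members: 3        # the operator manages three Cassandra members
```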
D: So the scenarios are going to be different. Oh, I need file, object, or block? Well, then choose Ceph. If they want Cassandra, they can run the Cassandra operator. And it's up to the admin to decide: is this operator ready for production? Can I use it?
D: Our Cassandra operator is still in alpha. We're working on progressing it so it's more production-ready. So someone might decide, oh, it's not quite ready — then let's get the community going there. That's our goal: get these progressing so people want to use each of these operators in production.
B: And it's nice that they can independently have their own maturity and evolution for each project. They each have their own alpha, beta, and stable declarations, independent of each other. So Rook is a home for a set of common functionality, common implementations, for storage providers in Kubernetes environments, but they each have some of their own unique traits and bring their own unique use cases, while being brought together in a single home.
D: You're asking about the CI — or, excuse me, test validation — how do we know it's production-ready? Yeah, right. What we have in place in the CI is that for every PR, every master build, and every release build, we run a suite of integration tests, which gives us confidence that, okay, the feature is working end to end. It's definitely not a scale test — it's not scaling, it's not under stress — but it's basic functionality testing.
B: One more note on that: that end-to-end, e2e functionality we have there is a common set of functionality as well. So any storage provider integrating with Kubernetes through the Rook project has all the platform and infrastructure there, in the end-to-end CI flow, to dynamically bring up environments, deploy the storage, and run basic sanity testing or more comprehensive use cases. All of that is common, something the Rook framework offers to make it less of a burden to do end-to-end testing for a new storage provider — they can take advantage of that commonality and functionality in Rook's testing.
D: From my perspective, I don't know that we've had many questions around that. When Rook is the management plane, Rook doesn't modify the storage system in any way — we just pick up the Ceph docker image, and then we can deploy that and work with that.
A: Ceph does everything — so where do we sit on that spectrum? And the second question relates to high availability and disaster recovery kind of stuff. As you pointed out earlier with some of your existing adopters, you really, really need to rely on Rook when things are not so good — when you're having to restore backups and handle some pretty bad situations. So how resilient is Rook itself — sorry, I might have used the word Ceph instead of Rook a few times there; my apologies.
A: How resilient is Rook to these kinds of things? What sort of storage backend does it have, for example? How resilient is it to really bad situations in clusters, like network overloads and these kinds of things — which is when people really rely on Rook to keep their storage serving? Yeah.
B: Thanks. I'm going to take the first question, and then I'll defer to Travis for the second. In terms of authoring or creating a new storage provider and integrating it with Rook — in full transparency, I would say that on that spectrum it is more towards the side of having to do some unique work for that particular storage provider than I think we want to be at long-term, beyond the initial investment in the Rook framework.
B: Things like the storage placement and selection stuff are common, but operational tasks — failovers, doing backups and restores, or maybe some policy stuff — that's where we want to make more of an investment going forward, and continue to build on the common, general framework that we've done. Currently, new storage providers do more of that themselves than I think we want to be at long-term. So it's an ongoing investment.
D: Yeah, one more comment to add to that. At the end of the day, a storage layer like Ceph or EdgeFS has very individual needs as far as how it's orchestrated, and so even with common refactoring or common helpers to create the operators, most of the time will be spent on what that operator — that storage provider — needs to orchestrate it.
D: So it's not like we will someday have a common Rook API that lets you deploy anything — that's just not realistic. But yeah, any other questions on that before I go to the disaster recovery one? Okay. As far as resiliency: Rook builds on everything in Kubernetes that we possibly can and relies on it for, you know, starting up and running pods. We don't rely on running containers ourselves.
D: We create a deployment, which then manages the pod lifecycle for us, or a StatefulSet, or whatever the Kubernetes resource is that will give us reliability in that scenario — that's what we manage. And, as was said, Rook itself implements, depending on the operator, a higher-level sort of health check for the storage provider. One place we have that in Ceph relates to the mons.
D: Ceph has these mons, which are basically the brains of the system, and they need to maintain quorum at all times or the storage platform is down. Kubernetes will make sure those daemons, those pods, keep running, and will restart them if they fail. But if somehow they get stuck, and the basic health check or liveness probe isn't working, the operator will manage that and say: oh, they really aren't responding.
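The two layers Travis describes — Kubernetes restarting failed pods via basic probes, with the operator layering storage-aware checks like mon quorum on top — can be sketched as a generic container liveness probe. The probe command here is hypothetical, not the exact one Rook configures.

```yaml
# Illustrative container-spec fragment: Kubernetes restarts the container if
# the probe fails repeatedly; Rook's operator adds higher-level checks (e.g.
# whether the mons still form quorum) beyond what this probe can see.
livenessProbe:
  exec:
    command: ["ceph", "mon_status"]   # hypothetical health command
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3
```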
B: Something I'd add to that: the Rook project has been around for over three years now, and the lessons we've gotten to learn from being exposed to a pretty large community running Rook, in a fairly broad and large variety of scenarios, have been invaluable for increasing the robustness and stability of the orchestration side itself — of that control plane side.
D: One more comment: even when Kubernetes itself fails and the whole cluster just went sideways, we have guides — which require a lot of manual processes today — but since the data is persisted to disk, in many, many cases people are able to recover their data when they restore Kubernetes.
E: So the point is, we don't know yet what we need — especially for me, who is very new to this; just a couple of weeks ago I was tasked to work on this, and it's not something I worked on before. I joined a little bit later, so I'm sure there are benefits of Rook that you mentioned that I missed, but I just wanted to estimate the amount of effort that's required to integrate.
A: Or the Rook Slack channel, or whatever the team thinks is most appropriate — is that okay? Yeah, sure, of course. Right, definitely. So I just wanted to wrap up the Rook part of the discussion — and then I think Saad's got some updates for us on the Harbor due diligence he was delegated ten days ago. So I'm going to suggest, unless anyone has any alternative suggestions — we have roughly four weeks until KubeCon. What I would like to do is have anyone raise concerns they have. Obviously, today I haven't heard any concerns.
A: I've heard questions, but no major concerns. I'd say let's give that another week, until next Wednesday. If there are no concerns — I think we will make sure that one or more of the TOC look over this, but I have not seen any holes in the due diligence that's been performed up to now — then we call a vote, unless there are any major objections, in less than two weeks' time, just to give the TOC more than two weeks before KubeCon to finalize the vote.
B: Yeah, that sounds great to me, and I definitely really appreciate the attention to our desired timeline as well. That's really great — that you're willing to work with that and hopefully help us reach this goal we have of being done by KubeCon Amsterdam. And we definitely have time to invest after this to answer more questions and address any concerns — Travis and I are very available and willing to engage and continue driving this.
A: Absolutely — and thanks again to the Rook team; you guys have done a really great job of crossing all the T's and dotting all the I's and making it very easy for us. I'm going to have to drop off in a few minutes, so I'm actually going to hand over to Saad now. Saad, are you in a good position to give us a quick update on the work you did on Harbor?
F: So, yeah, I was tasked with taking a look at Harbor. Harbor is a container registry; it's able to run on various different platforms. Unlike a lot of existing container registries, it can be deployed on existing cloud providers and it can be deployed on-prem. The ask was for SIG Storage — the CNCF SIG Storage — to take a look at this from a storage perspective, since they're looking to graduate, and see if we had any concerns.
F: So I took a look at it, and they have two storage dependencies: one is application data, and the second is their image and chart data. For their application data, they use storage classes and PVCs when deployed on Kubernetes, which is great — it means it's extensible and can leverage whatever storage the cluster administrator has set up. They also support object storage as an optional thing, instead of storage classes and PVCs, and they provide a number of different object storage backends: Azure, GCS, and so on.
F: Users can use those instead, but they're not required to use object storage — it's an option in addition to storage classes and PVCs. So, no concerns there. For the image and chart data, they depend on a Postgres SQL and a Redis cluster existing on the cluster somewhere. Actually, I believe Michael clarified that their deployment will create the database and the Redis cluster if one does not already exist, but if a customer wants it to be HA, they need to go and deploy it themselves.
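The two storage choices Saad describes map onto the Harbor Helm chart's values roughly as below. Key names are recalled from the harbor-helm chart and not verified here; check the chart's values.yaml before relying on them.

```yaml
# Illustrative harbor-helm values: PVCs via a storage class for registry
# data, with object storage as the optional alternative; database and Redis
# default to chart-managed, non-HA instances.
persistence:
  enabled: true
  persistentVolumeClaim:
    registry:
      storageClass: ""        # any class the cluster admin has set up
      size: 100Gi
  imageChartStorage:
    type: s3                  # optional: object storage instead of a PVC
    s3:
      bucket: harbor-registry
      region: us-east-1
database:
  type: internal              # chart creates Postgres itself (not HA)
redis:
  type: internal              # chart creates Redis itself (not HA)
```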
A: I share your concerns about the default deployment being non-highly-available, particularly for a container registry — that's kind of problematic for a graduating project. For incubation, that would be totally fine, but for graduation I think we just need to call that out very clearly. I suspect that may be a blocker for graduation from the TOC's point of view.
A: One counter-argument that might come up is that for a very long time, Kubernetes was actually not highly available by default either. Yes — and I don't know to what extent that is solved today, whether the default deployment is actually a highly available etcd cluster versus one backed by single volumes; I don't know what the answer to that is.
A: Any other questions or concerns regarding Harbor? If not, I'm happy to leave this to Saad to communicate with the TOC, make clear what this restriction is, and make the decision there. I definitely would like that to be resolved in the near future — whether or not it graduates now, or whether we delay graduation until that has been resolved. Yeah.
A: Right, we will get the decisions that were made today documented. Please get your questions regarding Rook in during the next week, before next Wednesday, and then we will put that up for vote, giving three weeks before KubeCon. Similarly for Harbor: I think we can consider the due diligence essentially completed, and some of the issues have been raised, so if you have any others to raise, please do so in the next few days, because that will probably also go up for vote before KubeCon. Thanks, everyone.