From YouTube: CNCF SIG Storage 2020-03-11
A
Alright, so we have only a few items on the agenda today, but I'm hoping we can get through it quite quickly, and there's a bunch of stuff we can formalize if we get this done. So I just wanted to first of all double-check that we don't have any follow-ups on the Harbor and Rook project reviews. I don't believe we do, and I believe we've submitted all the information; just double-checking with you, Raining, if there's anything outstanding as far as you're aware.
D
Yeah, and I think we need to be clear about what exactly the concern is, because Harbor does support HA. It just does so by saying: hey, if you want it, you need to go and set it up yourself. And so, if the deployment aspect of it is critical, then we need to call that out. I think the TOC's position was that it's kind of unreasonable to ask Harbor to go and ensure that all of its dependencies are deployed in a specific way through its own default deployer.
C
Right, yeah, I have a personal opinion on this, and I think that's kind of a reasonable approach, what you just mentioned, Saad. So we do repeatedly see these architectures which are based on essentially a single-point-of-failure relational database, be it MySQL or Postgres or whatever, and to some extent the whole cloud native movement exists specifically to address that problem.
A
I think that criticism is fair. If you have something which is designed that way, that should be taken into consideration. I mean, specifically: if, for example, Harbor has a dependency on a database, but its default deployer doesn't deploy the database in HA, there is still a set of exercises, or maybe a Helm chart, or something you can use that allows you to deploy it in HA.

How does Harbor cope with a failover? Does it need to be repointed, so you have to have some sort of load balancer across a multi-master setup of databases, or something? You know, I mean, there are so many different options, and I think if that's not made clear in the project, it's incredibly easy for an end user to get this wrong.
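To make the repointing concern concrete: the sketch below, which is not from the meeting, shows the kind of client-side failover an application is left to do when nothing in front of a multi-master database handles it. The endpoint addresses are hypothetical.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// pickEndpoint returns the first reachable endpoint from a candidate
// list. This is the manual "repointing" a client has to do if no load
// balancer in front of the database handles failover for it.
func pickEndpoint(candidates []string) (string, error) {
	for _, addr := range candidates {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			continue // this instance is down; try the next one
		}
		conn.Close()
		return addr, nil
	}
	return "", fmt.Errorf("no reachable endpoint among %v", candidates)
}

func main() {
	// Hypothetical primary and standby addresses.
	addr, err := pickEndpoint([]string{"db-0.example:5432", "db-1.example:5432"})
	if err != nil {
		fmt.Println("failover failed:", err)
		return
	}
	fmt.Println("connected to", addr)
}
```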
C
I mean, over and above that (and please, I haven't set up HA databases for many, many years, so somebody feel free to climb in and correct me), my understanding is that it's essentially, you know, well-nigh impossible, and that's why we have projects like Vitess, for example. I mean, it's a whole project designed to make MySQL highly available and scalable, and my understanding is that there isn't actually a simpler way of doing it. So any notion that one can actually just, you know, set up MySQL to be highly available and scalable: it's not true. I mean, correct me if I'm wrong, but you have to have manual master/slave failover or something like that, because there isn't a way to have seamless automated failover, which is the essence of cloud native computing.
F
The only alternative is to use a mounted storage that is durable and take the hit on HA: if the pod goes down, it comes back up and performs recovery. When it comes back up, it performs recovery and continues, with the possibility of a little bit of data loss; you may lose your last transaction or something, depending on how you set it up, but that's generally true.
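A minimal sketch of the trade-off described here, assuming nothing about Harbor itself: an append-only log that can acknowledge a write either before or after flushing it to durable storage. Flushing first means a crash cannot lose an acknowledged write; acknowledging first is faster but risks losing the last transaction, which is the small data loss mentioned above.

```go
package main

import (
	"log"
	"os"
)

// appendRecord writes one record to a durable log. With syncBeforeAck
// set, the record is flushed to stable storage before the caller sees
// success; without it, the record may still be in the OS page cache
// when the process (or pod) dies, so the last write can be lost.
func appendRecord(f *os.File, rec []byte, syncBeforeAck bool) error {
	if _, err := f.Write(append(rec, '\n')); err != nil {
		return err
	}
	if syncBeforeAck {
		return f.Sync() // durable before acknowledgement
	}
	return nil // fast path: acknowledged but not yet durable
}

func main() {
	f, err := os.OpenFile("wal.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := appendRecord(f, []byte("txn-1"), true); err != nil {
		log.Fatal(err)
	}
}
```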
D
Yeah, I think we should differentiate between kind of different levels of HA, right? So if we're talking about a single site, and we're talking about, you know, synchronous replication across multiple instances within that site, I think that's what we're talking about for Harbor, versus kind of geographic HA, multi-site HA, which is, like Quinton mentioned, much more of a challenge.
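As a rough illustration of the single-site synchronous replication being contrasted here (a sketch, not any real database API): a write is acknowledged only once every replica in the site has applied it, so losing one instance loses no acknowledged data. Asynchronous schemes acknowledge first and replicate later, which is where a failover can drop the last transactions.

```go
package main

import "fmt"

// replica stands in for one database instance within the same site.
type replica struct{ name string }

// apply would be a network round trip in a real system.
func (r replica) apply(rec string) error {
	fmt.Printf("%s applied %q\n", r.name, rec)
	return nil
}

// syncWrite implements the synchronous, single-site scheme: the write
// is acknowledged only after every replica has applied it.
func syncWrite(rec string, replicas []replica) error {
	for _, r := range replicas {
		if err := r.apply(rec); err != nil {
			return fmt.Errorf("write not acknowledged: %w", err)
		}
	}
	return nil
}

func main() {
	rs := []replica{{"pg-0"}, {"pg-1"}, {"pg-2"}}
	if err := syncWrite("txn-42", rs); err != nil {
		fmt.Println(err)
	}
}
```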
C
Just to be clear, I'm not talking about multi-site at all. I'm just talking about a datastore which needs to be transactional. So if you, you know, upgrade your containers in the repository, you need to know that they're actually there, because the entire cluster depends on that version being there, and if the repository becomes unavailable or gets corrupted, your entire cluster has potentially failed.
C
I don't mean to overstate the seriousness of that, but in reality, you know, you have, perhaps, a global outage going on. You fix the problem, you send it to your repository, and your repository loses that transaction because, you know, it's asynchronous replication or whatever. You now have this ongoing global outage, or rather:
C
You know, a cluster-wide outage, which is, you know, sort of inevitable: it's going to happen, and that's what cloud computing and cloud native storage are designed to solve, so that you don't have those. So, yeah, no, please don't confuse it with multi-site replication or anything else. I'm just talking about the very basic use case and making sure that the repository is available when you need to perform a transaction.
C
Kubernetes, I mean etcd, is fundamentally transactionally sound. You have three nodes, they agree on who the master is, they agree on all the transactions that are committed, and if any one of them fails (in fact, in some cases you have five, but let's just use three), any one of those nodes can go down, you don't lose any transactions, everything is perfectly sound, and nothing becomes unavailable. That's essentially what I'm talking about: there's a fundamental difference between having an etcd-type system underlying your storage and having a relational single point of failure.
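The arithmetic behind the three-node (or five-node) claim is just Raft majority quorum; a small sketch of the numbers:

```go
package main

import "fmt"

// An etcd/Raft cluster of n members commits a write once a majority
// (n/2 + 1) of members has it, so it tolerates the loss of
// n - majority members without losing any committed transaction.
func main() {
	for _, n := range []int{1, 3, 5} {
		majority := n/2 + 1
		fmt.Printf("%d members: quorum %d, tolerates %d failure(s)\n",
			n, majority, n-majority)
	}
}
```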
F
By the way, people don't even trust it: people that go into production with Vitess don't even trust etcd. Some of them were actually asking about what happens if we lose all the etcd data. So one design principle of Vitess is that if the etcd data is lost, it can be manually reconstructed.
C
I mean, irrespective of what people believe, if you burned down an entire data center and all the hard drives in it, by definition all of that is gone and there's nothing you can do, unless you've, you know, synchronously replicated it somewhere else. So I don't think the expectation is to be unrealistic like that, but to be very clear about what failures you can tolerate and what failures you can't tolerate. Burning down an entire data center is not a failure that anybody claims to be able to tolerate without any outage.
A
I guess what we're debating here is the implementation of those back-end dependencies. So the way you instantiate them, the way you configure those dependencies, can have a huge impact on things like availability, because, you know, things like etcd and ZooKeeper, which lend themselves to that sort of technology, also have kind of, you know, some weird behaviors depending on how you deploy them. So, for example, the default etcd: theoretically you can probably achieve strong consistency on those sorts of databases too, but it is very, very dependent on the deployment mechanism, and this is why I was kind of specifically curious and asking Saad: how well is it defined how you deploy those things in HA? Because almost certainly a default thing in a Helm repo won't do the right thing for these sorts of requirements.
D
You know, set that up yourself and manually deploy those dependencies; that's kind of left, you know, as an exercise for the user. And speaking with the Harbor maintainers, they said that the default Postgres and Redis Helm charts do enable an HA deployment, so, to Quinton's question, I don't know how well that works out of the box, but it is supposedly supported.
C
To the best of my knowledge, and please, anybody feel free to correct me, it is not actually possible to deploy Postgres in an HA and scalable fashion, or even HA, to the point of the Vitess example we mentioned earlier. So to wave one's hands and say, well, if you want HA then do it yourself, is only fine if that's possible.
C
So this seems to be an example of something that relies on a single-point-of-failure relational database that has not been demonstrated to be deployable in an HA fashion, and on which the entire cluster depends. And there is a whole class of these applications; we've seen many of them being submitted to the CNCF, and I do think, you know.
C
But if, if I said all of this previously in the issue, in all due diligence I'll look it up; if not, I'm happy to add to it if it's not clear. But I think I already made that clear if you look at the notes from, was it two or four weeks ago, I don't remember, the notes for this meeting. I put it in words there. Okay, as long as it's written somewhere it'll be easy to point to it. And this is from memory.
F
I was just asking. I mean, I agree with Quinton's point; it is actually true. So the only thing I was worried about, I mean, the reason why I thought it was okay, was because metadata not being highly available is treated as acceptable by some people, but if it is not reconstructible, then it's a huge problem. And I don't know if that is a property of Harbor, but if you lost all your metadata, can you reconstruct it manually?
F
We could let them off the hook if that was the case, but otherwise, yes, it is definitely a concern, and in principle everything that Quinton said is 100% true. But the bigger question for me was whether we have said, okay, we find that acceptable. If we have not said so, I think it's fine. We can.
C
What I suggested we do, and I think what Saad did, was we highlight the fact that this thing is not available and we defer to the TOC to decide whether they would like to delay. So the project has already stated that they plan to support more highly available backends. Oh, what's that? Yeah, I've got it, yeah. No, I:
C
Understand, yes. And so the question was just: does the TOC want to delay graduation until that work is finished, or do they want to graduate it now regardless? And so I agree with you: if it were my decision, I would delay graduation until that work is finished. They've decided not to, and that's the prerogative of the TOC, and that's reasonable. My question, or my concern, is actually much more general than Harbor, and I'm not having a go at Harbor particularly. I'm:
C
Just wanting us to come to a clear conclusion on the point of this entire class of things that are based on single-point-of-failure relational databases upon which an entire cluster depends. We need to make a blanket decision as to whether those projects can graduate before they have highly available backends or not, and my point, phrased strongly, is that I do not think we should. I don't think it creates the right precedent for the CNCF.
A
Instead, based on these issues, we're not, you know, we're not okay to recommend graduation as a SIG, and I would like further clarity on this. And I'd just like to copy it into the chat window, just to make sure everybody has seen that email. So I think it's perfectly fine for us to raise this at the next TOC meeting; I'll add it to the Storage SIG agenda, or an update, for the next time. Thanks.
C
I think, the way things are moving at the moment, SIG Runtime is actually supposed to coordinate all the responses from all of the SIGs, which it hasn't done yet, and also there's a document that outlines this: each SIG should kind of describe what they looked at and give a summary of their findings, and that hasn't been done yet, and that's what the TOC noted. That seems like what is happening now, and SIG Runtime is responsible for doing that. I'm:
A
Alright, so moving on to the next agenda item. As discussed last time, I have created a copy of our storage landscape white paper, giving it a v2 title, and I've copied in the database section that Sugu had worked on and that we had reviewed, and I've copied in the updated management and CSI section that Jing had put together and that we had reviewed and agreed on.
A
Of course, the document link is included in the minutes. It would be really great if people could just scan through it and make sure, you know, that I haven't messed anything up or made any mistakes, and obviously feel free to comment on anything that you think might need an update or whatever.
C
Awesome, thanks for your hard work on that, Alex. Just one parting comment: we had a bunch of things that we were sort of targeting for KubeCon Europe, which is obviously now postponed. I would like to encourage us to just stick with our plans even though the actual KubeCon has been postponed. Let's try and get all of those to-do items done,
C
you know, by KubeCon's original date, which I think we can do, given we've got most of the work done, rather than let it slide, because I'm pretty sure that we're going to have another deluge of things that need to be done by the new KubeCon dates. There are going to be a bunch of projects that arrive and want to, you know, join the sandbox and be graduated, etc., by the new date, whenever that turns out to be, so let's not let the existing stuff slide beyond that.
A
Yeah, yeah, that makes complete sense. So just to recap, the three things that we had wanted to do were the v2 of the storage landscape, the use case template, and the performance talk. The performance talk has slowed a little bit; we have people working on it, but yeah, we need to speed that up a little bit. And the use case template was going to be the next thing on the agenda.
A
Initially we had kind of discussed having specific use cases. After much debate, you know, especially around the CNCF and king-making and that kind of thing, we settled on having use case categories instead of, you know, specific use cases, and we've had a few discussions about what those categories should be. We put together the first five categories that we think need to be tackled, which were databases, object stores, message queues, instrumentation (by instrumentation I'm thinking of, you know, things like Prometheus, for example) and KV stores. The idea would be that we would have a use case document for each category in GitHub, and that use case document might then have one or more options describing, you know, more specific examples of that category. So, for example, the use case for databases might have two or three examples to discuss: you know, a single-instance database, a replicated database, a sharded database, that kind of thing.
A
So if we look at the use case template: we've taken the template that Louis had started working on and had circulated within that working group, and I've put it into a Google document and added a few additional sections. So we kind of start off with some simple goals and non-goals, which is more to describe what's in scope and what's out of scope, to make that clear to the reader.
A
We then have a storage attributes section, and this is based on the attributes from the landscape white paper. So, you know, when we talk about availability or scalability or performance or whatever else, what are those different use cases dependent on? We even have a section on consistency, which would, you know, be a great discussion point following on from the HA discussion we just had now, etc., and then finally durability as well.
A
Okay, is there anybody on the call that particularly wants to work on this? I'm happy to work on the template, and obviously Louis, who is on the call, will be helping to drive this as well. I.
A
So that's fine. What I'll do is I'll set up a time with Louis and we'll work through the example. I'll send an email and a Slack message to the whole group, and anybody who wants to join can join. We can iterate through one example and then we'll share it out at the next Storage SIG meeting.
A
That's true. So, Louis's PR: we had discussed it and we just need to refine it, and this was kind of the output of that. So I've already met with him recently and we kind of discussed this, because when we discussed the PR we got a lot of feedback: we didn't want it to be too specific, we didn't want it to be specific to a particular project, we wanted it to be more suited to a category.
C
I think the way to make these particularly good is if we could find somebody, for example, who knows about KV stores, to write a fairly authoritative guide. It would be great if we could pass it by some of the other KV stores (TiKV is one obvious example, and maybe Consul) and make sure that the thing represents, you know, a generic set of recommendations for deploying KV stores on cloud native stuff. Cool.
E
It uses, like, BookKeeper and HDFS for tier 1 and tier 2, or, when used in conjunction with Dell's products, it would use ECS or Isilon as tier 2 storage, or, you know, we have S3 connectors and so on. But we're built on top of ZooKeeper and BookKeeper, among other things, and we have.
A
If you want to suggest the project for the sandbox, I think, you know, if you want to share some links or whatever to the project, we can certainly circulate them. Certainly a first point might be to circulate some links to the project to the mailing list and see if, you know, there are any questions. But if you want to go ahead, I know you're trying to decide between the CNCF and the LF AI, etc.
C
Yeah, I'd suggest, irrespective of whether you choose the CNCF or not, and whatever the TOC decides, I think it would be great to have a presentation anyway, just for the general awareness of the SIG of what you guys are doing. And then, you know, if things went that way, it would naturally feed into us being able to recommend it, or otherwise, to the TOC. So I think, yeah, either way it'll be great.
A
Right, so what we'll do, as Quinton suggested, is get a presentation on the agenda for the next meeting in two weeks.