From YouTube: CNCF SIG Storage Meeting 2020-08-12
B: Everyone, we're just waiting for a few more people to join. Thank you.
B: Hi Quinton. I'll just wait for maybe a minute or two.
F: Yeah — also, on the backgrounds: I do actually have backgrounds for KubeCon that are coming out. I sent things out after the ambassadors meeting yesterday. There's a link in chat to the GitHub repo I've tossed the virtual backgrounds into, so you can all be very festive for next week.
C: Backgrounds don't work for me, because my hair color matches the back of the screen. I've tried everything.
B: All right, I think we're good to start. First off we have S. — I hope I'm pronouncing your name right — who's asked to have a few minutes to give a quick update on some new community meetups he's helping organize, with a focus on storage on Kubernetes.
A: Thank you, I appreciate that, Alex. So, as Alex mentioned, I'm running meetups and forming a bit of a community around data on Kubernetes, trying to make it a place where people can share their experiences and learn from each other. I'm going to share my screen real quick — I only have four slides, so it's not going to be death by PowerPoint by any means. I got the intro from Alex; I'm Demetrius.
A: I've been primarily playing in the MLOps space, but Kubernetes was very much at the top of everyone's mind, especially when you talk about Kubeflow, and it was like: hey, there needs to be something where we can all come together and talk about data when it comes to Kubernetes, and potentially evolve and mature the space a little bit.
A: So what we decided on was having these meetups on one hand and the Slack workspace on the other. From there, I wanted to make it as open and as inclusive as possible, and to get people talking about war stories of whatever has to do with doing data on Kubernetes — best practices, operators, databases, whatever you think is interesting in this field.
A: It's a light lift, or it can be more of a talk where you share slides and all that — it's up to you, really. Like I mentioned, we have this Slack workspace, and I can share these links in the chat right now in case anyone wants to join and give their two cents. The last thing is, I would really love to see it become something where people can bring their own initiatives and start their own series around whatever they think is interesting. Maybe you say, "Hey, I really want to talk about security," or about something else that's top of mind for you, and we can have a series or an initiative around that. So that's my pitch, as we could say, and I hope to see you all there.
G: I had a question, if I may take a second: is the purpose to capture user stories, or to build APIs around data? I know there's a Data Protection Working Group in Kubernetes — I don't know if you attend that, but it would possibly have some crossover.
A: Oh, that's great to know, for sure — I will reach out to them as well. The interesting thing about this is that it's not really working towards anything specific: it's not like we're getting together and creating projects or working groups, and we're not doing any open source or anything. It's more, just as you were mentioning, user stories, best practices, things like that.
A: Next week we're having a guy on who's talking about how he banged his head against the wall for two months straight trying to set up a certain pipeline, or his stack, and he wants to share the learnings from that whole thing, so that hopefully others don't repeat the same thing he went through and don't have to go through that same pain.
D: First of all, I think this is a fantastic idea. I think there's a huge wealth of untapped information in this space — people, as you say, are struggling with difficult problems.
D: I think data on cloud computing in general is hard, and data on Kubernetes is arguably even harder, and, as you say, people have solved these problems, but the information is not readily available out there. We actually had a similar initiative here to canonicalize some of these experiences so that people could reuse them — I think Luis is on the call; he was kind of spearheading this.
D: We were trying to create a catalog, essentially, of common use cases that we could then document and say: this is a good way of doing this thing, and if you're doing something like this, here's a recipe that we know works. I don't want to paraphrase or put words in your mouth, Luis — maybe you want to talk about that — but it might be interesting to explore the overlap between your meetups and this initiative, because it seems like there are a lot of common goals there.
E: Absolutely — I really like it, and I look forward to attending and probably even participating.
E: I think this is great. Just like Quinton said, we were looking to do this model where we don't only talk about the reference designs of storage, which we have documents about, but also how customers view storage and data in their applications and how they get consumed. It's one thing to be an expert in how data moves through applications, but another is how it gets consumed and used in Kubernetes, and many of those questions are still unanswered.
A: Yes, that's exactly right — that whole idea of "hey, let's get together, let's share this knowledge," because it's untapped and we need to be talking about it or writing it down. I did have the idea, after the meetups, of taking some of these gems that people talk about, writing them down, and putting them on Medium. So I'll connect with you offline, at least to see about that.
E: Yeah, exactly right. I can also tap people in the community, and participate too — that'll be great.
A: Yeah, I would love that, so I'll connect with you offline, because I'm definitely looking for speakers — people to share their wisdom and their knowledge.
D: One other quick question. It seems like there's a lot of overlap between what that group does and what this group does, and I don't view that as a bad thing at all.
D: Historically, this group has focused a lot on the actual infrastructure — the layers of the storage stack and all the various open source projects — and we tried to get this sort of use-case analysis and publication going, and I think it's been tricky, I'll put it that way. But I would love to see an ongoing collaboration between the meetups and the SIG.
D: One model that comes to mind is maybe you come and give us a 10-15 minute summary per month, or something like that: "Here's the really cool stuff we did this month; here's stuff you should probably go check out; here's a podcast; here's this." I think that would be super useful to this group, and then maybe the reverse might also make sense.
D: Maybe somebody from the SIG does a quarterly presentation at your meetups, just to keep the two groups in sync. It doesn't sound like they need to merge into one, but they probably do need to know what each other is doing on a regular cadence.
B: All right — thanks, Demetrius, and I'm sure we'll have lots of interaction both ways coming up.
B: So, everyone, I just wanted to do a quick sync-up on where we are on the performance white paper we've been working on. I'll share my screen so we can go through it briefly. Can you see this okay? Yes, cool. All right, so what I've done is simplify some of the parts, because we didn't want to make perfect the enemy of good, so to speak — I didn't want to continually delay the document until we have perfection. So I've simplified the document and made the bits which are currently complete a bit more scoped out, and I think we may be ready to put this out for review.
B: I'm just going to quickly scroll through this to see if we're okay with it before we do a release.
B: The basis of the document is to provide some background on how end users can understand performance and potentially do benchmarking, but it also highlights the pitfalls and some of the challenges of doing this, so that they can understand more clearly how to do apples-to-apples comparisons.
B: I've started off with a simple introduction that basically says: always test your own application in your own environment; don't rely on published results from vendors. I also put in a link to the white paper, so that people can understand the general attributes and terminology of the storage environment before they start reading.
B: I talk about the two different classes — volumes and databases — and we have quite a detailed description of what forms, say, a database workload and the volume workloads, which covers some of the general principles and some of the things to look at.
B: We have a particularly well-written section that covers the common pitfalls and considerations: for example, what tools to use to measure which metrics, and what basic terminology to be aware of — things like, when you're looking at latency, don't just look at the measurements at a particular point in time, but look at the measurement over time, so that you can see the different percentile numbers; the impact of concurrency on performance; the impact of caching and, for example, compression — things that might affect performance testing. And then the environment that you're testing in: aspects that affect the environment in physical infrastructure as well as in cloud environments.
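As a rough illustration of the "look at latency over time, not at a single point" advice, here is a minimal sketch (not taken from the white paper — the sample format, window size, and percentile choices are all assumptions) that buckets latency samples into fixed time windows and reports percentiles per window:

```python
from statistics import quantiles

def latency_percentiles_over_time(samples, window_s=60):
    """Group (timestamp_s, latency_ms) samples into fixed windows and
    report p50/p95/p99 per window, instead of one point-in-time number.

    Assumes each window contains at least two samples.
    """
    windows = {}
    for ts, lat in samples:
        windows.setdefault(int(ts // window_s), []).append(lat)
    report = {}
    for w, lats in sorted(windows.items()):
        # quantiles(n=100) yields 99 cut points: index 49 ~ p50,
        # index 94 ~ p95, index 98 ~ p99.
        q = quantiles(lats, n=100)
        report[w * window_s] = {"p50": q[49], "p95": q[94], "p99": q[98]}
    return report
```

Reporting p50/p95/p99 per window makes transient spikes visible that a single end-of-run average would hide.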
B: Things like the effects of random writes and write amplification, the effects of encryption on benchmarking, and the effects of topology; and notes on the challenges on the client side of performance testing — because very often the clients can be the bottleneck in these environments.
B: Things like the performance implications of, say, replication, and perhaps data protection; and also some things to keep in mind in terms of what happens in the background in storage systems — some storage systems might have out-of-band compression or garbage collection or something like that, which can affect performance. And then a little bit about benchmarking tools.
B: Then a note about level-setting the environment, to make sure that things like CPU and network in the environment are comparable. And then, finally, we were supposed to have a section on how to actually perform a volume benchmark and a database benchmark, and this is where we're currently a little bit stuck.
B: We don't have good information on how to run the benchmarks, but at this stage I'm thinking that, rather than hold the document any longer, there's still value in publishing it as-is, with some very basic pointers to the benchmarks that people can run, and then we can iterate and maybe provide version two when we're ready —
B: — to actually specify more details, and more examples, of how exactly to run those benchmarks. Because I think understanding the performance, some of the terminology, some of the concepts, and some of the issues you come across is still important, and there's still value there.
C: So, with Vitess — CNCF actually got this deal from Packet.net, which is what they call "bare metal in the cloud."
C: Basically, you get bare-metal instances, but they're in the cloud. What we're doing with that in Vitess is we've started running nightly benchmarks, because the community has been asking: how do you make sure that you don't introduce performance regressions and things like that? We just started doing that a few days ago — we just got the environment up and running.
C: We have a very basic setup that runs every night and then reports on a Slack channel: what was the QPS, what was the latency, and that kind of stuff. I don't know if it's completely related, but it's a very basic starter kind of thing. We hope to build more features around it, but I don't know if it can be used as an example of how to set up a benchmark and run it.
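A nightly report of the kind described — pushing QPS and latency numbers into a Slack channel after each run — could be sketched roughly like this. The webhook URL, result field names, and message format below are all assumptions for illustration, not the actual Vitess harness:

```python
import json
import urllib.request

def format_report(run_date, results):
    """Build a human-readable nightly benchmark summary.

    `results` maps a benchmark name to a dict with 'qps' and 'p99_ms'
    keys (hypothetical field names; adapt to whatever the harness emits).
    """
    lines = [f"Nightly benchmark results for {run_date}:"]
    for name, r in sorted(results.items()):
        lines.append(f"  {name}: {r['qps']:.0f} qps, p99 latency {r['p99_ms']:.1f} ms")
    return "\n".join(lines)

def post_to_slack(webhook_url, text):
    """POST the summary to a Slack incoming webhook (URL is a placeholder)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Slack incoming webhooks accept a JSON body with a `text` field; a cron or CI job would call `post_to_slack` with its configured webhook URL after each nightly run.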
C: It's obviously very test-specific, and it's more driven by the fact that the community has been asking for it — so it's very end-user oriented from that perspective.
B: How long do we delay putting the document out, waiting for somebody to write some of this in a way that end users can actually consume? Because I think we might be getting trapped in idea generation, where there are lots of possible options for putting content into the document; but I want to try and balance that with timeliness.
C: I guess that's kind of what I was feeling, because just as I was talking about what we did with Vitess, I realized that 90% of it is specific to the test itself. There's not much to learn from it for somebody who wants to run against a different system. Yeah.
D: Right. I would suggest — I think the first part of the document, up until the tools section right at the end, looks fantastic. To be honest, you might be focusing on what's missing and what took so long, but the document you went through looks fantastic, and I would publish it as-is. I would actually make the following suggestion: remove the last few sections, because they're incomplete, and add a section of links to other information.
D: One sentence, and then people know that there's a volume two coming, and then hopefully we can actually set a timeline and have a target date for that. It doesn't sound like a huge job, but having said that, it's a year in the making and it's proven tough, so maybe it's more difficult than it sounds. But it would be nice not to wait another year for that, hopefully.
C: That is something they've asked for. The other thing they've asked for — they asked for it as a problem in Vitess, but I think it applies to all database instances — is version compatibility. These go into functional areas, but they're things that people are really, really concerned about when adopting a software project.
B: So we could say that this performance document covers off the performance attributes in a little bit more detail, and then we can start working on perhaps a document to cover things like availability, and so on.
D: Yeah, I think those are very important topics and we should cover them. I agree with Alex that they're not necessarily performance, but definitely one approach would be to dive into each of those four or five areas we mentioned, Alex, in as much detail as you have with performance here. The one comment on that is that there's kind of an inevitable overlap between these areas.
D: So, you know, if you have something that doesn't have redundancy, it tends to perform better, and if you have something that doesn't handle failures, it also tends to perform better — and vice versa.
B: Yeah — in the performance document we specifically highlight that things like the way your data is protected, your availability, and your consistency, for example, are all big factors in how your performance attributes work. So we point out those things, and I think the overlap is actually covered quite well in the original landscape documents.
B: But yes, you're right — I want to try and avoid having lots of circular references, if you see what I mean.

D: Yeah, that's a good point. I think the original document did deal with it well, and maybe we just dive into each area relatively in isolation and refer back to that document to remind people that these things all influence each other — but this particular document is squarely about performance, or durability, or reliability, or whatever it happens to be.

B: Sounds like a great idea.
B: Makes sense. All right then — I'll send the link out to the mailing list, and hopefully we can get some movement on that. Okay.
B: Yeah, I agree. Thank you — thanks, Nick, and thank you for your input as well. All right then, we have a couple more items I just wanted to bring up. So: the TiKV project.
B: We had completed the due diligence on this and it was going through the TOC votes, and we realized that there was — well, not potentially, there was — a core repo that wasn't covered by the initial due diligence. So the decision was taken between the TOC and the project to work on including that additional repo in the due diligence PR, and then to restart the voting process.
B: I'm just highlighting this because we had a few discussions offline during the week. Aaron, was there anything else worth capturing from that?
B: No — so, TiKV has a dependency on something called the Placement Driver; the repo is called PD, and that repo is also used by TiDB, which is obviously a separate project, so it hadn't been bundled into the original TiKV due diligence. The project team have said, look, in order to remove any of these concerns —
B: — they'll bundle the Placement Driver repo into TiKV, under the same governance structure, so that it's just part of the same thing from an IP perspective.
B: When we did all the presentations and all the reviews, it was just assumed that PD was part of the submission — and, honestly, it was — but we just needed to, I guess, formalize that.
B: So hopefully it's not a big deal. The other item I wanted to quickly discuss — and hopefully this won't take too much time — is that Derek Moore, who had previously presented the Pravega project, is looking to move forward with the due diligence to proceed with an incubation submission. As a quick mental refresher, Pravega is a project currently sponsored by Dell; it's a streaming storage product.
B: It has some similarities to Kafka, for example, and it also has some message-bus similarities to NATS.
B: If you haven't read the incubation proposal, it's worth reading, because there's actually quite a lot of useful detail in there. We all thought it was a great project, the presentation went particularly well, and we've recommended moving forward to the DD stage with the TOC.
B: I'm waiting for one of the TOC members to step forward to work with us on doing the DD, but I guess we also need to figure out who's going to work on the DD from our end. I'm happy to help out myself, but I'm quite time-limited over the next couple of weeks.
B: So I was wondering if there's somebody else who could help out with the due diligence process as well. For what it's worth, I had been working with Derek on the proposal and we iterated a few times, so the proposal is actually very strong and already covers things like end users, use cases, and things like that.
B: Yeah, exactly. That said, like I said, a lot of the criteria for incubation were already covered in a fair amount of detail in the proposal, so I don't think there's a lot of work required to document or dig into the detail. But we have to make sure that we've covered all of the areas based on the guidelines documents.
B: So yeah — I don't know, maybe Luis, or Origin, or Sugu, you might be interested in helping out; we can maybe try and make a go of it over the next couple of weeks.
B: Yeah — so maybe, if we can, we commit to starting work on this sometime in September, or the beginning of October.
B: All right, great — let's commit to working on it in September. I should have more time as a backfill in September. But again, if anybody else is available to help at that point, that would be useful too.
D: Okay, yeah — I mean, we can probably work around it, within reasonable limits. But if you told us you could have it done at the end of October, I think that would be good, and if you told us you could have it done in the second week of November, I think that would be better than not having it done at all, for sure.
B: Awesome, thanks so much. I think that was the last thing I had on the agenda for the meeting today, so unless anybody has any other things they want to raise, we get 15 minutes back.
B: Yes, sorry, that's a very good point — I'm sorry, I meant to mention this. About a week ago, I believe, the OpenEBS project, which is currently a sandbox project, put in a proposal to move forward to incubation, so we're currently looking to put an OpenEBS presentation on the agenda for the next meeting, on August 26th.
J: Hi everyone, hi Alex, this is Janice. I just had one quick question: we submitted the form for the sandbox project as well, like two or three weeks ago, and I'm just wondering what the next steps are in the process — what's happening next?
B: The data lifecycle project? Yes — so, if you're making a sandbox submission, the TOC now have a new process where they vote on sandbox projects once every month, I believe. The best person to speak to is Amye Scavarda Perrin at the CNCF, but you should also be able to look at the current status in the GitHub repo for sandbox. In fact, let me just quickly look that up now.
B: This is the current board. I don't see it on there — I'll double-check with Amye, but effectively...
effectively.