From YouTube: CNCF Storage Working Group Meeting - 2018-02-28
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
B
All right, this looks pretty good. We have about 20 people, which seems to be about average for the group. So thanks everybody for joining this morning. We're continuing this biweekly meeting with a presentation from OpenEBS, trying to share amongst the storage community all the different interesting projects and things that are going on and how they relate to cloud native. So there's two things on the agenda this morning. One is this presentation, and the second is to discuss the SWG presence at KubeCon in Copenhagen.
B
We've actually secured a few sessions there and wanted to open that up for discussion with everybody, to figure out what we all want to do and how we can work together to present some topics to the audience at the conference. So we'll do that at 8:30, but for now I'm going to hand it over to Kiran and to Jeffery to present on OpenEBS.
C
To give a quick snapshot of the background of OpenEBS: it was started by MayaData in early 2017, and it's still in an active development phase. Right now we are working on the 0.6 release. What was interesting for us is that people were able to resonate with the idea, and in the OpenEBS Slack channel we've kind of seen people say that they're already using it, with some people even claiming that they have put it into production, so there has been a lot of uptick in the community activity. OpenEBS is tightly related to Kubernetes, like tightly integrated into Kubernetes, so most of my time these days is spent on the OpenEBS Slack channel, and sometimes the issues kind of belong to the Kubernetes community, so I'm also hanging out in the Kubernetes storage channels, trying to kind of understand what OpenEBS brings to the table.
C
It's similar to Calico and Weave, if you consider that as an analogy: just like how they provide network services by containerizing the network capabilities and then using the underlying network infrastructure of the nodes themselves, OpenEBS also containerizes the storage capabilities and offers storage services to other workloads, at the same time using the storage infrastructure, the hardware attached to the nodes themselves.
C
So the main agenda for this meeting is to show you how OpenEBS fits into the storage landscape, then walk you through a few slides that show how it works at a very high level, and then talk about some of the problems that we are working on at OpenEBS. We think these problems are common to the other storage solutions being worked on for container environments, so we would like to share that, and we would also welcome any collaboration that we can get out of this meeting.
C
So MayaData is the company behind it. You must have seen the announcement when it was launched.
C
The first category, or the most commonly used category, is network attached storage. Here the storage capabilities are coming from outside the Kubernetes cluster, over the network, from an external storage service. That could be standard storage vendors, or it could even be, let's say, EBS or GPD; they all fall into this category. In fact, myself and the team at MayaData come from the background of developing one such storage server.
C
There have actually been multiple of them in the last decade or so in the software-defined space, and also around containerizing the storage, which we had been doing there for some time. So the other thing that's beginning to find its way through is direct attached storage. There are a lot of enhancements going on there as well, for the simplicity of deployment itself, but that mainly helps those applications that can take care of some of the replication and high availability kinds of requirements themselves. So something in between these two is what we call container attached storage. There are different names given to this category; we call it container attached storage because here the storage capabilities themselves are containerized, and the containers are serving the data.
C
They make use of the local storage attached to the Kubernetes nodes and then expose the data back to the stateful workloads, while capabilities like replication, snapshots, encryption, etc. are taken care of by these storage containers. The cool thing about container attached storage is that the storage services are themselves containers which can be run as workloads in Kubernetes, which kind of removes the management burden, or rather delegates the manageability to Kubernetes, like any other workload on Kubernetes.
C
For example, installation, upgrades, and monitoring can be handled by the same tools that you use to manage your standard workloads. The containers themselves specifically deal with the capabilities of storage management, the disk management, and the data protection and high availability aspects; they just cover that part. So, to make that concrete, let's see how OpenEBS does this.
C
Right, so the "container attached" part comes from the source of the storage. Looking at network attached storage, the data is located and served over the network; with direct-attached storage it's the disks directly attached; and container attached is simply to say that the data is being served from containers.
C
So OpenEBS uses iSCSI as the way to connect to the storage containers, so the iSCSI initiator is one of the prerequisites, though not the only one, that we need to have on the nodes. Then we, or the cluster administrator, can configure where the data is stored, whether on a host directory or on an actual block device, using CRDs, which are called storage pools here, and then tie that into the storage classes. So that's pretty much the setup part of it.
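A minimal sketch of what that setup part can look like as a StorageClass, assuming the provisioner name openebs.io/provisioner-iscsi and illustrative parameter keys (these are placeholders, not copied from the slides):

# Illustrative sketch; the provisioner name and parameter keys are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-standard
provisioner: openebs.io/provisioner-iscsi    # OpenEBS dynamic volume provisioner (assumed name)
parameters:
  openebs.io/storage-pool: "default"         # storage pool: a host directory or block device on the nodes
  openebs.io/jiva-replica-count: "2"         # volume policy: how many replicas to keep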
C
Then you get into the regular workflow of a PVC referring to a storage class. The workflow starts by kicking in the OpenEBS dynamic provisioner, which will spawn a storage container, or in this case let's say an iSCSI target. This is a standard Kubernetes deployment with a service attached to it. The service IP is the one that we use in the iSCSI IQN, and then, depending on the policy attached to this particular volume, there could be one or more replicas.
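The PVC side of that workflow is just a standard claim pointing at the OpenEBS-backed class; the names below are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-standard    # the class sketched above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi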
C
The target takes care of synchronously replicating the data to all these replicas, and then we create an iSCSI PV out of it and hand it over to the kubelet, which takes care of attaching it to the pod and then running the workload. The next few slides are taken from a live example, showing the different Kubernetes YAMLs that get generated. So this is the PV object, where it's a...
B
Yeah, you're cutting out, so let's try this: I think everybody has the slides in the meeting notes; if not, I will post them to the channel right now. Let's try having you stop sharing and see if your voice gets better, and then everybody can drive the slides themselves to follow, if that works, Kiran.
C
So I'll go ahead with slide 7. Slides 7 through 11 are all really about the Kubernetes YAMLs that get generated as part of a new volume. The first one is an OpenEBS PV, that's on slide 7, and then the IQN and portal IP are something that's coming from the iSCSI target service, that's on slide 8, and this is really tied into a deployment file that launches the storage controller.
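Roughly, the generated PV is a standard iSCSI-backed PersistentVolume; the name, portal address, and IQN below are placeholders rather than the values shown on slide 7:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example                                    # generated name; placeholder here
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: "10.32.0.10:3260"                    # ClusterIP of the iSCSI target's Service (placeholder)
    iqn: "iqn.2016-09.com.openebs.jiva:pvc-example"    # IQN built from the volume's service (illustrative)
    lun: 0
    fsType: ext4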
C
The replicas also come from the same image, with a different flag, and again use a Kubernetes deployment, which is on slide 10. The replication factor is coming from the deployment itself, and we use node affinity, tolerations, and so on to kind of pin the replicas to the nodes where the storage is available and to make sure replicas land on different nodes. The other thing about the replicas is that we use the volumes themselves to control whether the storage has to be persistent, depending on the policy.
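A trimmed sketch of the scheduling and persistence pieces of such a replica deployment; the label keys, image, argument, and host path are assumptions for illustration, not the exact manifest from slide 10:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-vol-replica
spec:
  replicas: 2                                # replication factor comes from the deployment itself
  selector:
    matchLabels:
      openebs.io/replica: demo-vol
  template:
    metadata:
      labels:
        openebs.io/replica: demo-vol
    spec:
      affinity:
        podAntiAffinity:                     # keep replicas on different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  openebs.io/replica: demo-vol
      containers:
        - name: replica
          image: openebs/jiva                # same image as the controller, run in replica mode
          args: ["replica"]                  # flag is illustrative
          volumeMounts:
            - name: data
              mountPath: /storage
      volumes:
        - name: data
          hostPath:
            path: /var/openebs/demo-vol      # host directory backing this replica (assumed path)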
C
So that's, at a high level, the different components of OpenEBS and how it works. Now, talking about where we are focusing these days: since OpenEBS has been put into use, we are looking at some of the problems that come in with respect to iSCSI connections from the kubelet, whether it is running on the host or in a container, or stale mounts left behind after a detach, that kind of stuff.
C
So
this
is
where
we
are
kind
of
focusing
more
these
days
to
contribute
back
to
the
communities,
community
and
I
think
on
some
general
problems
that
can
help
otherwise
Cassie
based
solutions
as
well.
The
other
thing
is
volume
policies.
We
just
briefly
talked
about
the
replica,
but
I
think
when
it
comes
to
storage.
There
are
a
lot
of
things
that
we
need
to
track,
as
policies
could
be
like
a
snapshot
schedules,
RPO,
maintenance,
etcetera.
C
That work is still going on; we still need to fix that part. But to that point, Jiva is just one storage engine, and in the next couple of slides we will talk about another engine that also comes from OpenEBS. Okay, so I focus primarily on the control plane and hooking into the Kubernetes part, and Jeffery works on the storage engine, so he will be able to give more details on that one.
C
The next one is about storage management with respect to the nodes. One of the things that we see is, yeah, there is some work going on with respect to local PVs, but making those persistent storage options available as some kind of first-class objects, just like any other resource, is something that's still missing in Kubernetes, and also how we dynamically attach and detach disks to storage engines so that we don't have to restart, the way you do when you use it as a local PV.
G
Great, so thanks Kiran. As mentioned, my name is Jeffery. Just a real quick introduction: I've been in storage for around 10 years. I was actually at a tipping point of my career when I thought that I wanted to get out of storage, but then, you know, containers happened, so here I am working on storage. I've done development across the whole stack of storage systems, kernel, user space, software-defined, and now I'm trying to do, I suppose, the logical extreme of software-defined storage, and that is storage for containers, in containers.
G
So before we went out and designed a new storage system in containers, we first stopped and figured out, okay, what are the requirements in a containerized environment? Typically storage developers reason from the bottom up, and we really tried hard to invert that process: we put the DevOps persona central and reasoned from the top down. While doing so, we noticed a couple of things, and I wanted to go over them real quick.
G
Obviously, one thing that immediately shows up is that the way we build, deploy, and put applications into production has changed a lot over the years. I probably do not have to tell this audience how that works, you probably know it even better than I do, but it has evolved for sure. So we believe that these typical new application properties allow us to rethink certain aspects and can potentially impact the design of the storage system.
G
So we decided rather quickly that we were not building yet another scale-out storage system or a distributed file system, because these systems can be found in the stock Linux kernel today, and, looking at the previous properties, like scalability native to the application and reliability, we did not believe that was necessarily a good fit. Also, distributed storage is really hard to develop, and probably even harder to debug in production, and sometimes you actually need special drivers to unleash the full potential of these distributed file systems.
G
So we wanted to try something else. Another thing that we noticed is that the hardware side of things really forces a change in the way we do things. Single NVMe devices can do up to 450,000 IOPS, and even a lot faster these days already, so our reasoning is that we don't really need to scale out storage nodes to achieve higher I/O, and similarly for capacity, as microservices typically have a small working set size.
G
And when you look at it from a container attached storage perspective, the data sets are relatively small. So I'm going to slide 18 now, and we'll get to what we're doing in a little bit. As Kiran mentioned, the OpenEBS replicas are pluggable, and we like to believe that there is no one file system to rule them all. For example, a copy-on-write file system has great properties in general, but not so much if you run a relational database that has its own write-ahead logging and things like that.
G
Some databases even want raw disk devices and open the disk device with O_DIRECT and do everything themselves. So one of the file systems that we're working on is an implementation of the DMU engine of ZFS. I'd like to point out that this is not running in the kernel, but in user space. I will go into the problems that we're facing there and the solutions that we're applying to mitigate that transformation, so to speak, but what it allows us to do, very quickly:
G
Yeah, so we can move workloads across clouds leveraging the technology of ZFS. So, as mentioned, we're in user space. Oh sorry, I'm on slide 19, going a little bit too quick now. So, the performance problem, as we move ZFS to user space, unpeel the onion, and only graft the transactional layer of it: Linus Torvalds, probably well known to everybody, made a comment that file systems in user space are nothing but toys.
G
So when we look at the problems in terms of performance, the performance bottlenecks in user space are the context switches, the copy-in and copy-out, and any particular DMA transfers. The other aspect that we observed is that, with current hardware trends, the kernel actually becomes a bottleneck. So we've kind of reached this impasse, where we're saying that, okay, file systems in user space are toys, but on the other hand kernels are becoming the bottleneck due to these new technologies like hundred-gigabit networks, NVMe devices, and 3D XPoint.
G
So the problem, then, of course, that we needed to solve is: okay, how do we change this I/O path from the kernel to the IOC without rewriting all the software, because that would definitely not work. The solution is that we looked into the vhost technology that we borrowed from the virtualization space, so the IOC, among others, can expose different interfaces.
G
vhost is one of them, and the replica containers locally on the node connect to this IOC through the virtio protocol, using shared memory for read/write, so that makes it zero copy, and you only need to basically kick the other side if there is data written to the shared memory buffers. Unfortunately, there was no vhost SCSI library that we could pick up; they were embedded deeply into bhyve, QEMU, and other hypervisors, so we wrote one of our own.
G
That said, you can find this in our open source repository, and the trick here seems to be that you need to allocate huge pages and pin them, and then they become suitable for DMA.
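As an aside (not from the slides), a minimal sketch of how a pod could request pre-allocated huge pages from Kubernetes, assuming the cluster has the HugePages feature enabled; the names and sizes are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: ioc-hugepages-example            # placeholder name
spec:
  containers:
    - name: ioc
      image: example.org/ioc:latest      # placeholder image
      resources:
        requests:
          memory: 512Mi
          hugepages-2Mi: 512Mi           # pinned 2Mi huge pages, usable for DMA-style shared-memory buffers
        limits:
          memory: 512Mi
          hugepages-2Mi: 512Mi
      volumeMounts:
        - name: hugepages
          mountPath: /hugepages
  volumes:
    - name: hugepages
      emptyDir:
        medium: HugePages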
G
We're also exploring future work on integration with FD.io, or "Fido" if you will, and in particular the VPP VCL, which allows us to do vector packet processing so that the network I/O also goes through the IOC. On the final slide, to put it in a picture:
G
On the left side, this is basically how it looks between two processes sharing data between themselves through pinned shared memory. I don't think this is something new, other than the fact that we're doing it directly in a container, without a hypervisor. The right-hand side is a trimmed-down picture of what it would look like on a single node. So you have the IOC that does one hundred percent polling.
G
Then you have the app and the target, which Kiran mentioned, and we replicate n ways, which is all defined in YAML. The target writes to the replica. The replica applies adaptive polling, because when we really want to go fast we can't afford the context switches of a poll call, so what we really do is busy looping.
G
If the load is low, we switch back to a poll, and then we transform, so to speak, the I/O, apply checksums, put the thing on disk, snapshots and what have you, and then submit the I/O back to the physical disk through the vhost layer, and eventually it boils down to the IOC. So, I could not talk any faster than this, keeping in mind that my native tongue is not English, but with that I am at the final slide, and if there are any questions, please feel free to ask us.
G
Well, so the name was actually based on the fact that, you know, it relates to EBS, so it kind of rings a bell, so to speak. The fact that it's open also speaks for itself, as it is completely open source. So, you know, to be honest, 100% honest, the name was already there before I joined.
G
It
refers
to
the
fact
that
it
is
block
storage
for
one,
and
it
has
strict
ties
to
obviously
how
the
way
that
the
way
that
we
do
cloud
storage
in
general
and
it
yeah
it's
basically,
you
know
the
elasticity
from
block
storage
comes
from
the
fact
that
we
can
spin
up
containers.
So
we
get
it
that
way,
but
yeah.
A
better
reason.
I
would
have
to
come
back
to
you
for
that.
G
Partially. The plan is that we have multiple pluggable backends, of which Longhorn is just one, and we believe, as I mentioned, Jim, that different workloads require different types of on-disk formats, so to speak. So we will have multiple, and these are just the first two. Okay?
G
Obviously, we are very flexible in the sense that we can integrate with others relatively easily, and this is because we have virtualized the I/O stack, so to speak. So for us it doesn't really matter what we write to. It could be a physical disk. This was more intended towards local devices, you know, the direct attached storage concept, but we could just as easily write to an RBD volume from Ceph, or from Gluster, or what have you.
G
It certainly does a lot of orchestration of the storage, but eventually it doesn't really matter what you do; eventually you need to write the data to a disk, right, and we give you the freedom to choose what disk that is. If you want to write it to a local disk, then you can use cStor. If you insist on writing into a Gluster volume or an RBD volume, we will not stand in your way. That's basically the freedom that we provide, augmenting, obviously, the capabilities with snapshots and cloning on that local volume.
G
Well, actually, the reason for us obviously is the cross-pollination with other open source projects in particular, and community growth. I think, you know, in all honesty, and I don't mean this in any way to come across as arrogant, we have had kind of this organic growth. So maybe Kiran has some other things that motivate him at a personal level, but tying in with the other open source projects is the most dominant reason for us, or at least for me; maybe Kiran has an additional reason.
C
You said it right: the intent is to really carry on the community work on some of the common problems. It could be on this project itself or directly on Kubernetes itself, and, yes, we are playing with the idea of whether to submit to CNCF or not. It really depends on the community's interest, but if the community feels that this should be submitted, then sure.
H
So that last part wasn't super clear for me, but I first just want to say I'm glad to see you guys here; I've always been kind of wondering who you guys were and what you were about, so I was just very interested to see the presentation, and welcome. So, just to go back to the CNCF thing...
B
Okay, aside from, you know, starting a thread for any extra questions or directly contacting the team at OpenEBS, let's move forward. The next item we have to discuss for the day is the sessions that we've secured at KubeCon to talk about SWG-related topics. Ben, are you out there still? Yep, I'm here. Cool, any preference on how you want to proceed here? I was thinking about just opening up the discussion.
F
One thing I did notice: for storage SIG, we only signed up for an intro session, not a deep dive, and it looks like for the CNCF workgroup there is both an intro and a deep dive. Maybe we could offset it such that storage SIG does an intro session during that time slot and the CNCF workgroup does a deep dive, and that way you have two separate sessions that the same set of folks can attend. Yep.
B
Let me take that as an action item to go back to the conference team, their committee, and see if we can reschedule the intro. Ideally it would be great to have, you know, more storage sessions at the conference, I think, so if we could get them to move it, that would be cool, yeah.
B
All right, so considering that we've got these three sessions, maybe we should start with this face-to-face, because I think that one is a little bit questionable. We secured it at 8:20 on Wednesday night, I think 8:20 to 9:40, and I'm not sure that, timing-wise, that makes a ton of sense. I'm kind of curious to hear what the team thinks about that, whether we should cancel that session or whether we're going to have enough people interested and enough stuff to talk about at that time.
H
I know my experience with these face-to-faces, at least for the Kubernetes storage SIG, is that what we do is take a couple of topics that are just too meaty, that need a high-throughput conversation, and then we schedule those for the face-to-faces and make time to discuss them. So I was curious: does the working group feel like there are some really important topics that we need to talk about face to face?
B
I'm
not
sure
I
think
that
you
know
that
the
TOC
is
still
figuring
out.
You
know,
I
think
the
guidance
for
us.
If
you
guys
heard
on
the
last
TOC
call
the
server
list
team
you
know
put
out
their
white
paper
and
Alexis
was
asking
the
TOC.
You
know
what
they
thought
of
it
and
if
that's
what
they
want
to
ask
to
be
used
to
do
and
and
I
feel,
like
short
of
that
guidance
to
us
of,
like
hey,
go,
go
start
working
on
this
white
paper.
H
One topic that I think needs that high-throughput conversation is getting to the bottom of something I don't think we actually closed on: when we say CNCF storage working group, what are we referring to? I think we could have a debate on where the line is between application persistence and storage, and on what exactly the scope is that we're going to be tackling in this workgroup. And regardless of the taxonomy, we've got to figure out what level of the taxonomy is in scope, because we can't fulfill all the stuff we promised to the TOC, you know, writing this landscape white paper and doing the rest of the stuff, until that sits in the critical path.
B
I think there are, you know, multiple work streams going on that impact that as well. The TOC is considering reconsidering the definition of cloud native in the charter, which I think impacts all of the projects and the positioning of the groups and how they think about things. So I feel like there are some fundamental changes happening which we essentially need to build upon, and, you know, the definition of cloud native storage and stuff like that comes from that work.