From YouTube: CNCF SIG-Storage Meeting - 2019-06-12
A: I think we should be good to start. So, just by way of introduction, we have the team from JD who are going to present their distributed storage system project today, but before we do that I'd just like to go through a couple of quick logistical items.

So, as you may be aware, the storage working group has converted to a SIG and, as part of that process, we're going to be creating new CNCF list-serv mailing lists, we'll probably be renaming the Zoom call, and maybe a couple of other little things like that. I'll make sure that for the next couple of meetings I email both mailing lists, both the old and the new, but probably in July we'll switch over completely to the new mailing list. So just let me know if you have any queries or questions on that, I think.
B: So ChubaoFS is a distributed file system designed for containerized applications. It is currently running in production, supporting more than 100 application services on JD's container platform. We open sourced it about three months ago, and it is under the Apache 2.0 license.

It has a really small codebase, nice and clean, written in Go, and we are in the process of preparing a proposal to the CNCF sandbox. ChubaoFS is more or less the first open source project from JD in the storage area. We have a publication at SIGMOD 2019, which will be held at the end of this month in Amsterdam.

So it's kind of a cool project for us as a start into the open source area from the company side. The file system has already been integrated with the Kubernetes CSI, and we also have a Docker Compose deployment available. Rook and Rancher integration is on our to-do list.
B: Because we only just open sourced it about three months ago, during these three months only members at JD have had the responsibility of maintaining the entire open source project, but we are looking for external members and contributors.
A: Understood.
B: So ChubaoFS is not the first storage system that can be used for container apps; actually there are many options out there. For example, there is Google's Colossus file system, which is Google's in-house file system implementation that works together with Borg. On the open source side, we have CephFS and GlusterFS, together with a bunch of other open source projects from the community, and there are offerings on the public cloud side as well.
B: We also had several pain points, and the first one starts from the fundamental multi-tenancy feature. Multi-tenancy is a really nice feature for any company that wants to share its storage infrastructure across different applications and services in order to cut down the storage cost, right? So it's a really nice, must-have feature for us, but it also opens up several questions, like the need for a general-purpose storage engine and its integration with the container platform. We have lots of different applications and services, which all run together on the same container platform, and that leads to a very large number of files that we want to store. Then how can we store them, and how can we scale? The traditional file system designs, like HDFS, which use a single master to store the file metadata, will no longer work for us.

Another problem is how we handle capacity expansion, because four or five years ago we had up to a billion product images, but now the number is around a trillion and it keeps increasing every day. So we have a requirement for very fast capacity growth for all of these files. Together with that, because of this huge number of files, we don't want data migration to happen every day during capacity expansion, which is really a pain for us.

The last problem is finding an existing file system solution that can be easily configured and maintained, with optimized performance. We had a hard time finding such a solution that we could use in production.
B: ChubaoFS provides multi-tenancy support through multiple volumes. It has a general-purpose storage engine that can accommodate different file access patterns and different file sizes, and it is highly scalable with excellent performance. From the scalability perspective, we employ a separate metadata cluster to store the file metadata, and from the performance perspective we have several optimizations, such as relaxed atomicity for metadata operations.

We also have the application processes running in containers on top of what we call the client. The client has a FUSE interface, the file-system-in-userspace interface, so it runs entirely in user space with its own cache. It caches a bunch of things in order to improve performance, such as the dentry and inode metadata for newly created files, and the raft leadership information used when writing files to the data storage nodes.
A: Just a couple of questions, and, you know, if you're going to cover these in additional slides, feel free to defer them until then, but it would be good to get a bit more of an understanding of, you know, how does the metadata scale? Is it sharded, is it replicated, etc.? And similarly for the data nodes. And then I'd also like to get a bit more of an understanding of the client side of things.
B: Sure, sure, I have more detail on that here and in the later slides.
A: Perfect, thank you.
B: Yeah, so next I will describe each component in a more detailed fashion. So, firstly, the metadata subsystem: it uses in-memory storage for the metadata.
The structure is fairly classical: a cluster has a set of meta nodes, each meta node hosts a set of meta partitions, and the data on each meta partition is indexed by B-trees. So each meta partition actually contains a bunch of inodes and dentries.

The replication is performed per meta partition, and we use multi-raft to reduce the heartbeat overhead, because each meta node can host hundreds or even thousands of meta partitions. If we did a plain raft-group-per-partition replication, we would have too much communication overhead, with heartbeats between every pair of physical servers or physical nodes. So what we do is employ multi-raft, which groups the raft instances so that the heartbeat traffic between nodes is shared.
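To make that concrete, here is a minimal Go sketch of an in-memory meta partition that owns a contiguous inode ID range and indexes its inodes and dentries with B-trees, roughly as just described. All type, field, and function names are invented for illustration and are not ChubaoFS's actual code; the B-tree comes from the github.com/google/btree package.

```go
package main

import (
	"fmt"

	"github.com/google/btree"
)

// Inode and Dentry are simplified stand-ins for the records a meta partition
// holds; the field names are illustrative, not ChubaoFS's real types.
type Inode struct {
	ID   uint64
	Size uint64
}

func (i *Inode) Less(than btree.Item) bool { return i.ID < than.(*Inode).ID }

type Dentry struct {
	ParentID uint64
	Name     string
	InodeID  uint64
}

func (d *Dentry) Less(than btree.Item) bool {
	o := than.(*Dentry)
	if d.ParentID != o.ParentID {
		return d.ParentID < o.ParentID
	}
	return d.Name < o.Name
}

// MetaPartition keeps a contiguous inode ID range entirely in memory,
// indexed by two B-trees, as described in the talk.
type MetaPartition struct {
	Start, End uint64 // inode ID range owned by this partition
	inodes     *btree.BTree
	dentries   *btree.BTree
	nextInode  uint64
}

func NewMetaPartition(start, end uint64) *MetaPartition {
	return &MetaPartition{Start: start, End: end,
		inodes: btree.New(32), dentries: btree.New(32), nextInode: start}
}

// CreateInode allocates the next ID inside the partition's range.
func (mp *MetaPartition) CreateInode(size uint64) (*Inode, error) {
	if mp.nextInode > mp.End {
		return nil, fmt.Errorf("partition full, split required")
	}
	ino := &Inode{ID: mp.nextInode, Size: size}
	mp.nextInode++
	mp.inodes.ReplaceOrInsert(ino)
	return ino, nil
}

// Link records a dentry pointing from a parent directory to an inode.
func (mp *MetaPartition) Link(parent uint64, name string, ino uint64) {
	mp.dentries.ReplaceOrInsert(&Dentry{ParentID: parent, Name: name, InodeID: ino})
}

func main() {
	mp := NewMetaPartition(0, 99)
	ino, _ := mp.CreateInode(0)
	mp.Link(1, "hello.txt", ino.ID)
	fmt.Println("created inode", ino.ID, "inodes:", mp.inodes.Len(), "dentries:", mp.dentries.Len())
}
```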
A: And are they then only snapshotted and written to disk periodically?
B: Yeah, yeah.
A: Okay. And, you know, how does that work in terms of maintaining consistency across the nodes? I understand you're obviously using raft to ensure consensus between the different replicas, but say there's, I don't know, a power outage or something like that: what's the maximum recovery point that you have to go back to?
E: This is Shirin speaking; maybe I can answer that question. Actually, the consistency is guaranteed by the raft log, so persistence is not just periodically guaranteed by snapshots. Every write request to a meta node requires the raft log entry to be committed, and only then will the meta node return a success response to the client. So the raft log is persisted before the client can get a successful response.
A: Perfect.
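A tiny sketch of the ordering just described, with made-up helper names rather than the real write path: the only point is that the raft log entry is committed before the client ever sees a success response, so crash recovery replays the committed log instead of falling back to a snapshot interval.

```go
package main

import "fmt"

// Hypothetical stand-ins for a meta node write path; none of these names are
// ChubaoFS APIs. The point is purely the ordering: durability first, ack last.

type WriteRequest struct{ Key, Value string }

// proposeAndWaitCommitted pretends to append the request to a raft log and
// block until a quorum has durably stored it.
func proposeAndWaitCommitted(req WriteRequest) error {
	// A real implementation would go through the raft library's propose and
	// commit cycle; here it always "succeeds" so the example runs.
	return nil
}

var memStore = map[string]string{}

func handleMetaWrite(req WriteRequest) error {
	// 1. Durability first: no success response without a committed raft entry.
	if err := proposeAndWaitCommitted(req); err != nil {
		return err
	}
	// 2. Apply the change to the in-memory state (the B-tree indexed partition).
	memStore[req.Key] = req.Value
	// 3. Only now acknowledge the client. Recovery after a power outage
	//    replays the committed log, so no snapshot-interval of writes is lost.
	return nil
}

func main() {
	if err := handleMetaWrite(WriteRequest{Key: "inode:42", Value: "size=0"}); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("write acknowledged after raft commit")
}
```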
B: Yeah, a single metadata node can of course become a hotspot, but the point is this: first, we distribute the metadata across different nodes, and we always try to balance it out with what we call resource-utilization-based data placement. We always try to balance the workload across all of the metadata nodes, so there is less chance of running into that kind of hotspot.
Okay, so one of the important optimizations that we have for the metadata operations is that we relax the atomicity. The problem concerns the inodes and the dentries: because, as I said, we want balanced workloads, the inode and the dentries of the same file can be located on different metadata nodes.

This causes a problem, because then, ideally, the metadata operations would need a distributed transaction or something like that to ensure atomicity. If not, there's a chance we create an orphaned inode, which is an inode that has no associated dentries.

When that happens, it is hard to free it from memory. So that's basically the problem, and the decision involves a trade-off, because we want balanced workloads and we want to avoid hotspots, so eventually what we chose was to carefully design the metadata operations rather than use distributed transactions.
Here is a figure that shows the basic layout of a data partition. As you can see, it has different organizations for large files and small files: a large file is actually a list of extents, while for small files we pack multiple small files into a single extent.
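Purely as a schematic illustration of that layout (not the actual on-disk format), the sketch below shows a large file as an ordered list of extent IDs, while several small files are packed into one shared extent and addressed by offset and size:

```go
package main

import "fmt"

// Illustrative-only types for the two layouts described above.

type Extent struct {
	ID   uint64
	Data []byte
}

// LargeFile references a sequence of dedicated extents.
type LargeFile struct {
	Inode   uint64
	Extents []uint64 // extent IDs, in order
}

// SmallFileSlot records where one small file lives inside a shared extent.
type SmallFileSlot struct {
	Inode  uint64
	Extent uint64
	Offset int64
	Size   int64
}

// packSmallFile appends a small file's bytes to the shared extent and returns
// the slot describing where it landed.
func packSmallFile(shared *Extent, inode uint64, payload []byte) SmallFileSlot {
	off := int64(len(shared.Data))
	shared.Data = append(shared.Data, payload...)
	return SmallFileSlot{Inode: inode, Extent: shared.ID, Offset: off, Size: int64(len(payload))}
}

func main() {
	shared := &Extent{ID: 7}
	a := packSmallFile(shared, 101, []byte("tiny file A"))
	b := packSmallFile(shared, 102, []byte("tiny file B"))
	fmt.Printf("small files packed into extent %d: %+v %+v\n", shared.ID, a, b)

	big := LargeFile{Inode: 200, Extents: []uint64{11, 12, 13}} // one extent list per large file
	fmt.Printf("large file %d spans extents %v\n", big.Inode, big.Extents)
}
```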
B: One of the things we want to emphasize is the way we deal with those small files. The strategy is quite different from large files, because we don't want to employ an explicit GC or maintain our own GC mechanism, so what we do is rely on the punch-hole mechanism of the underlying file system: when a small file is deleted, we just punch a hole over its region in the extent.
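As a rough, Linux-only sketch of that idea (the file path, offsets, and sizes are made up), deleting a small file inside a shared extent can be reduced to punching a hole over its byte range with fallocate, leaving space reclamation to the underlying file system instead of a custom GC pass:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

// punchHole frees the blocks backing a deleted small file inside an extent
// file without rewriting the extent.
func punchHole(f *os.File, offset, size int64) error {
	return unix.Fallocate(int(f.Fd()),
		unix.FALLOC_FL_PUNCH_HOLE|unix.FALLOC_FL_KEEP_SIZE, // keep the extent size, just free the range
		offset, size)
}

func main() {
	// Hypothetical extent file used only for this example.
	f, err := os.OpenFile("/tmp/extent-0007", os.O_RDWR|os.O_CREATE, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := f.Truncate(1 << 20); err != nil { // pretend the extent is 1 MiB
		log.Fatal(err)
	}
	// Free the 4 KiB region that held a deleted small file.
	if err := punchHole(f, 64*1024, 4*1024); err != nil {
		log.Fatal(err)
	}
	log.Println("hole punched; the filesystem reclaims the space")
}
```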
A: So, just to understand this: in a scenario where you have, for example, lots of small files which you're appending to, does this trigger, I mean, how does that work? Do you have to create different extents to take all the appends, or do you have to punch holes and do this internal garbage collection somewhere?
B: No. For small files, that's kind of the advantage that we have: we do pre-allocation. How can I say it, we have pre-allocated space reserved just for small files, so if it's a small file we just write it into those pre-allocated extents, and large files are handled separately.
A: Okay, yeah. So, for example, you know, with these sorts of file systems, when you're updating the metadata like that, the pathological abuse case is something like a large, complex git repo, especially if multiple people happen to share the same repo or whatever, where you get into the scenario of updating these little files all the time and having to do lots and lots of little metadata changes. How does that work?
B: Yeah, so the thing is, for the case you describe, updating small files, the overwrite happens in place, so there's no need for us to update the metadata. That's the first thing.

When we write a file, we distinguish between data that needs to be appended and data that overwrites what is already there. For the overwrites, because an overwrite does not change the offset or the size of the portion of the file that has already been written, there is no need to update the metadata, which saves a lot of complication. So that's what we did to optimize this kind of small-write workload; I'm not sure if that answers the question.
B: Yeah, for our workloads the most common scenario is still sequential writes of large files, but we do have some cases, like copying files or small product images that we need to change, so we need to support this kind of operation as well. But that is not the most important case; the majority of the use cases are still sequential writes of large files, yeah.
A: When you're appending to these large files and you've got lots of sequential writes, do you try to minimize the updates to, you know, the inode, in terms of, for example, things like the file size or the last-modified time, to minimize those metadata updates and stop the metadata becoming the bottleneck?
E: We didn't relax consistency for this scenario; we strictly adhere to POSIX semantics, and the only thing is that we guarantee persistence via the fsync system call. So if the upper applications invoke fsync, then we can guarantee that the data is consistent and persistent; but if the upper application just writes a lot of data and we append it to a large file, then we update the metadata on the meta node upon receiving an fsync system call, or otherwise periodically, every two seconds. So we strictly adhere to POSIX semantics.
A: Got it.
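From the application's side that contract is just ordinary POSIX usage; the sketch below (with a hypothetical mount path) shows that appended data is only guaranteed durable once the application calls fsync, which is File.Sync in Go:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Hypothetical path on a ChubaoFS FUSE mount.
	f, err := os.OpenFile("/mnt/chubaofs/vol1/app.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := f.Write([]byte("record-1\n")); err != nil { // may not be durable yet
		log.Fatal(err)
	}
	if err := f.Sync(); err != nil { // fsync: data and file size are durable once this returns
		log.Fatal(err)
	}
	log.Println("record durable after fsync")
}
```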
B: Okay, so I want to talk about the replication strategy in our file system. What is a bit different about it is that we do the replication based on the different write scenarios: basically, for sequential writes we employ what we call primary-backup-based replication, and for overwrites we employ multi-raft, which is raft-based replication. So why do we have these two protocols?
B
We
pass
and
the
launch
of
us
together
so
as
well
as
the
always
goes
on.
We
probably
will
have
too
many
fragmentations
that
eventually
needs
a
kind
of
the
front
of
meditation.
We
should
kind
of
came
for
us.
So
that's
why
we
say
that
is
not
quite
suitable
for
the
overrides
in
this
scenario,
but
on
the
other
hand,
the
overrides
itself,
it's
when
we
use
the
mark
raft.
B
It
can
resolve
this
problem
because
we
no
longer
we
we
no
longer
needs
an
easily
this
kind
of
a
nuclear
system,
our
Swach
the
data
structure,
to
to
have
enough
ornamentation
instant.
We
just
employed
the
raft
lots,
but
the
downside
is
that
it
needs
to
have
the
right
wise.
One
is
for
the
rougher
the
rough
lot
and
the
second
one
is
for
the
in
place
a
bit
so
which
really
hurts
the
components.
B
They
were
writing
back
till
they
were
sending
the
request
back,
responds
back
to
to
the
leader
and
the
neither
was
received
all
the
other
camis
from
the
from
the
followers.
It
will
send
the
the
final
camis
back
to
the
clients
when
the
client
received
these
updates.
It
will
also
send
the
method
accommodates
request
to
the
meta
note.
In
order
to
update
the
information,
is
the
Matabele
informations,
like
the
five
cents
on
the
methanol
decide
for
all
rights?
B
This
is
a
different
scenario
because,
because
in
this
case
the
client
will
fetch
the
leadership
information
from
cache
and
then
just
ascended,
a
rag
across
er
to
the
reader
and
the
rest
pass
will
be
handled
within
the
raft.
So
once
the
rice
had
been
committed,
the
the
reader
will
send
just
and
commit
back
to
the
clients.
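An illustrative client-side view of those two paths might look like the sketch below; every name here is invented, and the protocol details (pipelining, acknowledgements, leader caching) are reduced to comments:

```go
package main

import "fmt"

type WriteOp struct {
	Offset  int64
	Data    []byte
	FileLen int64 // current length of the file as known by the client
}

func appendViaPrimaryBackup(op WriteOp) error {
	// Leader forwards to followers, waits for their acks, then acks the client,
	// who finally pushes the new file size to the meta node.
	fmt.Println("append: primary-backup pipeline, then metadata size update")
	return nil
}

func overwriteViaRaft(op WriteOp) error {
	// Client sends to the cached raft leader; commit and in-place apply happen
	// inside the raft group, and no metadata update is needed because the
	// offset and size of the written region do not change.
	fmt.Println("overwrite: raft propose to leader, applied in place")
	return nil
}

// write picks the replication path based on whether the operation extends
// the file (append) or touches an already written range (overwrite).
func write(op WriteOp) error {
	if op.Offset >= op.FileLen {
		return appendViaPrimaryBackup(op)
	}
	return overwriteViaRaft(op)
}

func main() {
	_ = write(WriteOp{Offset: 4096, FileLen: 4096, Data: []byte("new tail")})     // append
	_ = write(WriteOp{Offset: 1024, FileLen: 4096, Data: []byte("patched data")}) // overwrite
}
```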
E: Yeah, appends are based on the primary-backup protocol, so if there is a failure the client will just try to find another data partition. But for reads, since we have the raft protocol, reads stick to the raft leader, and if there is no raft leader the client will retry until a leader is elected.
A: Okay, and that is again around 30 seconds, right?
B: Yeah, so the next component is the resource manager. The resource manager manages the resources for both the data and metadata subsystems. It handles volume creation and deletion, and it has another piece of functionality, which is to dynamically expand the capacity of the meta and data partitions. The allocation, or selection, of partitions is purely based on resource utilization, so when the resource manager senses that the partitions on a server or node are about to be full, it automatically adds new ones.
A nice thing about this kind of capacity expansion is that we do not require any data migration when we expand capacity. Imagine the case where more data has to be written to the file system and a volume itself requires more data or meta partitions: those partitions will simply be allocated on the newly added nodes, so the cluster always stays balanced.
The last thing I want to mention here is one key detail about splitting a meta partition, which is something we need to be careful with, because, as we know, every inode has a unique inode ID. When we split a partition, or when new partitions are added to the corresponding volume, we need a mechanism for splitting the metadata that ensures the inode IDs on the newly created partition for that volume stay unique, because if we ended up with duplicate IDs on the same volume, that would be a problem. So what we do is, when we see that a meta partition is about to be full, we pre-cut the inode ID range for that meta partition. Let's say the original inode range is from 0 to infinity, and at a certain point we have a meta partition whose maximum allocated inode ID is, say, 100. Then, when we see that this partition is about to be full, we create a new partition whose starting inode ID is 101, and in that way we will not have duplicate inode IDs across different meta partitions within the same volume.
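A small sketch of that split rule, with illustrative names only: the old partition's range is capped at its largest allocated inode ID, and the new partition for the same volume starts just past it, so inode IDs stay unique within the volume.

```go
package main

import "fmt"

type MetaPartitionInfo struct {
	Start, End uint64 // inclusive inode ID range; End is "infinity" for the last partition
	MaxUsed    uint64 // largest inode ID actually allocated so far
}

const infinity = ^uint64(0)

// splitPartition caps the old partition's range and returns the new partition
// that takes over allocation for the same volume.
func splitPartition(old *MetaPartitionInfo) MetaPartitionInfo {
	old.End = old.MaxUsed // freeze the old range, e.g. at ...-100
	return MetaPartitionInfo{
		Start:   old.MaxUsed + 1, // new partition starts at 101
		End:     infinity,
		MaxUsed: old.MaxUsed, // nothing allocated in the new range yet; the next ID will be Start
	}
}

func main() {
	old := &MetaPartitionInfo{Start: 0, End: infinity, MaxUsed: 100}
	next := splitPartition(old)
	fmt.Printf("old range [%d, %d], new range starts at %d\n", old.Start, old.End, next.Start)
}
```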
B: Okay, so that's about all of the details of the different components in our file system. The last thing I want to show, to demonstrate this, is the production use cases inside JD. We have more than 100 applications and services using ChubaoFS. This includes applications like short video services, backups of database tables, and backend storage for systems such as HBase; shared storage for AI platforms and training services; storage for click logs and for search and advertising indexes; and the underlying storage for our in-memory database services.
A: Excellent, thank you so much for your presentation. I did have a couple more questions, if that's okay. So where does the dynamic volume provisioning endpoint live? Is it in the resource manager? Is that where the CSI driver talks to, basically?
B: Yes, it lives in the resource manager. The resource manager manages all of the resources in the cluster and handles resource allocation. Whenever a new volume needs to be created, the resource manager looks at the data partitions and meta partitions across the different nodes, selects the ones with the lowest memory usage or disk usage, and assigns those partitions to that particular volume.
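The selection policy described here boils down to something like the following sketch (struct and field names are made up): sort the candidate nodes by current utilization and take the least used ones for the new volume's partitions.

```go
package main

import (
	"fmt"
	"sort"
)

type NodeUsage struct {
	Addr      string
	UsedRatio float64 // memory usage for meta nodes, disk usage for data nodes
}

// pickLeastUsed returns the count nodes with the lowest utilization.
func pickLeastUsed(nodes []NodeUsage, count int) []NodeUsage {
	sorted := append([]NodeUsage(nil), nodes...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].UsedRatio < sorted[j].UsedRatio })
	if count > len(sorted) {
		count = len(sorted)
	}
	return sorted[:count]
}

func main() {
	metaNodes := []NodeUsage{
		{"meta-1:9021", 0.72}, {"meta-2:9021", 0.31}, {"meta-3:9021", 0.55},
	}
	fmt.Println("assign the new volume's meta partitions to:", pickLeastUsed(metaNodes, 2))
}
```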
A: Okay, yeah, that would be quite cool. And then, in terms of the deployment topology: if I understood correctly, the clients can effectively be running in containers, they can be orchestrated, and they have a FUSE file system that's effectively talking to the data servers and the metadata servers. But the FUSE mounts: is that also handled through a Kubernetes CSI driver, or is that something that's implemented through a sidecar or something like that?
C: Actually, I was just looking at your CSI code, this is Luis, and I noticed that you copied some of what's called the CSI common code. I suggest you remove that code; we actually wanted to remove it about a year ago, and it's still there. You should directly implement the interfaces in Go generated as the output of the gRPC spec, and not bother with that code, which was created for something back when we first started the project.
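For reference, implementing the generated CSI interfaces directly, as Luis suggests, can be as small as the sketch below. The driver name, version, and socket path are placeholders, and depending on the CSI spec and gRPC code generator versions you may also need to embed csi.UnimplementedIdentityServer; the Controller and Node services would be implemented in the same way.

```go
package main

import (
	"context"
	"log"
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identityServer implements csi.IdentityServer straight from the generated
// gRPC interfaces, with no intermediate "common" layer.
type identityServer struct{}

func (identityServer) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{Name: "csi.example.chubaofs", VendorVersion: "0.1.0"}, nil
}

func (identityServer) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil
}

func main() {
	lis, err := net.Listen("unix", "/tmp/csi.sock") // in a real deployment kubelet dictates the socket path
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, identityServer{})
	log.Fatal(srv.Serve(lis))
}
```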
C: So, if you have questions around the CSI implementation, let me know, ping me. And then one more question was around cache invalidation: you were showing some caches that are being used at the clients, and I was just wondering when and how you do cache invalidation for the clients and the other nodes.
B: So that's something we haven't really done yet. Currently we just set an expiration time, and if an entry is not expired we just fetch it from the cache; if something is wrong with a stale item in the cache, we will usually detect that later on during the communication with the servers, and then we update the cache. That's how it works, yeah.
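A minimal sketch of that expiration-based cache (illustrative only): entries are served until their deadline passes, a miss or an expired entry goes back to the servers, and an entry later found to be stale can simply be invalidated so the next access refetches it.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	value    string
	deadline time.Time
}

// TTLCache serves values until they expire and refreshes them on demand.
type TTLCache struct {
	mu   sync.Mutex
	ttl  time.Duration
	data map[string]entry
}

func NewTTLCache(ttl time.Duration) *TTLCache {
	return &TTLCache{ttl: ttl, data: map[string]entry{}}
}

// Get returns a cached value if it has not expired, otherwise it calls fetch
// (standing in for a round trip to the meta or data servers) and caches the result.
func (c *TTLCache) Get(key string, fetch func(string) string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.data[key]; ok && time.Now().Before(e.deadline) {
		return e.value
	}
	v := fetch(key)
	c.data[key] = entry{value: v, deadline: time.Now().Add(c.ttl)}
	return v
}

// Invalidate drops an entry that a later operation revealed to be stale.
func (c *TTLCache) Invalidate(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.data, key)
}

func main() {
	c := NewTTLCache(2 * time.Second)
	lookups := 0
	fetch := func(k string) string { lookups++; return "leader-of-" + k }
	fmt.Println(c.Get("partition-1", fetch), c.Get("partition-1", fetch), "server lookups:", lookups)
}
```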
C: Yeah, yeah, I completely understand. Excellent. You know, running distributed file systems is probably one of the hardest things there is, so I always find it interesting to see the implementations and how they deal with these issues. Well, thank you. Yeah.
A: Just to echo that, I think these distributed systems have so many challenges; it's always a set of compromises that you have to agree on, and I think this one has some interesting architecture and usage patterns. So the one last question I was going to ask about is the licensing model. Could you maybe just spend a minute talking about the repos and how they're laid out, whether there are any dependencies, and what sort of licensing restrictions there are?
B: Yes. We are using the Apache 2.0 license, and everything is maintained under that license. We do have several libraries that depend on licenses other than Apache, but that's just one component, and we have already explained it in the license file in the repo. Yeah.
A: Okay, alright, that's cool, I think that sounds great. So I think we're coming up to the end of the hour, so unless anybody has any questions, I'd like to take the opportunity to thank the JD team for the presentation. If it's possible, it would be nice if you could share the slides as well.