From YouTube: NFSV4 WG Interim Meeting, 2020-04-29
C: I mailed the slides to BB. I'd assumed he was going to come on and show them; otherwise we'll just, you know, share our screens individually.
A: Screen sharing works better for me. Okay, fine.
D: One thing we could try is that whoever is presenting could actually have their video on, if they have enough bandwidth, capacity, et cetera, because that usually makes it a bit easier for people to follow along with the presenter.
A: Okay, since it is now one o'clock, I guess I have to start with the Note Well and tell everybody that whatever you say here is an IETF contribution, and to be aware of that fact. I guess that's about it; that didn't take five minutes. We do have five minutes for agenda bashing. Is there anybody who wants to say anything about the agenda?
E: And there does seem to be a significant audio delay here. There's one item that's not included on the agenda. I'm not asking to change that right now, but I just wanted to mention that I am working on computational storage and continuing that work, and that does fall in the category of future work items.
E: Go have a look; I'm intending to update it again. There are some ideas that I want to put into it right now. It's basically a pile of ideas related to computational storage and NFS, and how we might implement it in NFS. So if you're not interested in computational storage, I don't think it's going to be very interesting.
G: There is a draft that I worked on together with David; mostly David cleaned it up, but we wanted first to present it as it stands, and then I will decide when to post the draft. We would like review of the draft, but first we have to discuss it. This draft is a collaboration between myself and David Black, and Christoph was kind enough to start the first version with me. I don't know if he's on the call today, but one important point: I use some of his older work.
G: What's the motivation? The motivation is to add support for a new emerging transport protocol. NVMe over Fabrics is a published extension of NVMe, Non-Volatile Memory Express, in which the NVMe subsystem is supported over a fabric. We have started to see movement from several vendors to do that. Access to NVMe-based storage devices is faster than with older technologies, but connecting to them using old transports would be inefficient and would defeat the purpose of that speed.
G: What we are trying to do with this draft is extend pNFS for NVMe; that is, extend the current pNFS layout from SCSI to NVMe. That's the goal of our draft. The transports used to connect NVMe over Fabrics based servers instead of SCSI already exist, defined within the NVMe over Fabrics protocols.
G: The fabric transports could include Fibre Channel, RDMA (iWARP and RoCE), and notably TCP, and we are looking at one more, but it's not clear yet if we want to include it. Christoph is interested; when I reached him, the thinking was that pNFS needs an NVMe layout to support all these new NVMe over Fabrics paths. That's the goal.
G: So we started from the pNFS SCSI layout and extended it for NVMe; the layout introduces NVMe details for the pNFS server and client. There is a difference with this type of storage: in principle the clients and the server hosts are on the same fabric, and they can actually transfer data both ways, which is a little bit different than before. Okay, so how do you explain it?
G: How does pNFS support NVMe? The pNFS SCSI layout, RFC 8154, allows pNFS clients to direct their I/O straight to the block storage devices, bypassing the NFS server. That's the starting point. This draft adapts the pNFS SCSI layout to enable the use of NVMe-addressable fabrics, whether Fibre Channel, RDMA or TCP, for devices using NVMe over Fabrics, enabling implementers to start from the pNFS SCSI layout and the current NVMe standards, for anyone who wants to implement pNFS this way.
G: NVMe over Fabrics is the NVMe fabric transport specification, covering Fibre Channel, RDMA and TCP. We don't believe that all three will be evenly implemented by implementers, but we want to be sure that we cover all of them; we believe that covers the future. Again, this is initially what we heard from vendors.
G: So, first of all, it requires the pNFS network storage devices to support the underlying NVMe over Fabrics transport, to provide reliable NVMe command and data delivery. That's the first critical point, which needs to be independent of architecture and consistent with the NVMe over Fabrics architecture and the commands used by pNFS clients to access the network storage device. These commands should be recognized by the fabric. The layers are shown in the diagram.
G: So this is the currently proposed configuration according to the NVMe over Fabrics specs. The hardest part for me was to collapse it, to see which pieces fit where, and sorry, I'm not a hundred percent sure that this is perfect, but that's the current thinking.
G: In case we want to focus on RDMA first: of course, it would be reasonable to assume that we want to use some kind of RDMA transfer protocol for pNFS. The NVMe port supports multiple NVMe over Fabrics transports if more than one transport is supported by the underlying network fabric, so in principle it could support iWARP, RoCE, or both.
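As background for the multiple-transport point above, NVMe over Fabrics identifies each transport with a TRTYPE code in its discovery log page entries (1 = RDMA, 2 = Fibre Channel, 3 = TCP). A minimal sketch of how a host might filter discovered ports by the transports it supports; the port list and the helper function are invented for illustration:

```python
# TRTYPE codes from the NVMe over Fabrics discovery log page entry:
# 1 = RDMA (covers iWARP, InfiniBand and RoCE), 2 = Fibre Channel, 3 = TCP.
TRTYPE = {"rdma": 1, "fc": 2, "tcp": 3}

def usable_ports(discovered, supported):
    """Return the discovered ports whose transport the host supports.

    `discovered` is a list of (transport_name, address) tuples, a
    stand-in for discovery log entries; `supported` is the set of
    transport names this host can actually use.
    """
    return [(t, addr) for t, addr in discovered if t in supported]

# Hypothetical discovery results: one RDMA port and one TCP port.
ports = [("rdma", "192.0.2.1:4420"), ("tcp", "192.0.2.2:4420")]
print(usable_ports(ports, {"rdma"}))   # only the RDMA port survives
print(TRTYPE["tcp"])                   # 3
```

A host supporting both iWARP and RoCE would simply pass both under the single "rdma" transport type, since NVMe-oF does not distinguish RDMA providers at this level.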
G: The diagram illustrates, again focusing on the RDMA piece first in the draft, the layering of the RDMA transport and common RDMA providers (iWARP, InfiniBand and RoCE v2) with the host and NVMe subsystems. You can see in the diagram that the NVMe host is connected over RDMA; NVMe/RDMA could use either iWARP, InfiniBand or RoCE through the RDMA fabric, which in turn could be any of them.
G: The diagram shows either of them, or all together with different hosts, over the NVMe RDMA transport, and this talks to the NVMe subsystem. The TCP and FC transports are not shown, but we are considering them, depending on, well, we don't know yet what implementers will prefer, so I think we want to keep these options open.
G: We do have a draft that is available, but it's not at the point that we want: we are not completely sure that we have put in all the right things that need to be there. That's why we are trying to learn a little bit from this discussion, and then we will post it, and it will be open for discussion again.
G: NVMe over Fabrics allows multiple pNFS clients to connect to different controllers on the same subsystem, so you may have multiple hosts, the pNFS clients, talking on the same subsystem to multiple devices through different controllers. An association is established between the host and the controller when the host connects to the controller's admin queue.
G: This association is established at the beginning; before you start using it for transport, you have to go through this sequence of creating the queues, which may differ from one transport and controller to another. The pNFS client also acts as an NVMe host, and the NVMe controllers are used as the pNFS storage device.
G: What this means, because they are connected on the same fabric, is that in principle it is possible with NVMe over Fabrics to move data either way, using either the push or the pull model from the perspective of the RDMA connection. A client may connect to pNFS storage devices using different network protocols, as I mentioned before, and different NVMe over Fabrics transports, and in principle there should be no conflict in using multiple ones. But again, this is part of the work.
G: One thing we want to nail down: the NVMe subsystem may require a host to use a fabric secure channel, in-band authentication, or both. That's an important point, because now there is this free movement between the clients and the server, and there is a potential for unauthorized clients to connect directly and mess up the data on the server, which was less possible before; now it's more open to that. That's why we need to be sure that we address the fabric secure channel in this environment.
G: Okay, so let's talk about the real details, starting with identification. The pNFS SCSI layout uses the Device Identification VPD page to identify the device in use by the layout, and we will continue along this path. Remember, NVMe over Fabrics storage devices need to provide an analogous unique identifier, based on the EUI-64 and NGUID identifiers. We'll have to work out these details; we have some ideas in the draft, but I think we are not yet sure what path implementers will take.
G: A UUID-based identification could be added, but it must use a new enum value to avoid conflict with possible future SCSI changes. What's happening is that we need to prepare for a wider identifier because, as you understand, connections on a large fabric could involve a large number of hosts and a large number of servers on the same fabric, so we believe that we may need more bits. David, correct me: it turns out that in the actual SCSI layout, the identifier numbers correspond to the SCSI standards; there's no UUID in the SCSI standards, and so the trick is to use an enum value that stays out of the way of SCSI, possibly doing something useful in the future that we might want to pick up.
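To make the enum-value point concrete: RFC 8154 defines designator types 1 (T10), 2 (EUI-64), 3 (NAA) and 8 (SCSI name string), so any new NVMe identifier type would need a value outside that set. A sketch; the PS_DESIGNATOR_NGUID value of 9 and the validate helper are invented here for illustration, not taken from the draft:

```python
from enum import IntEnum

class DesignatorType(IntEnum):
    # Values defined by the pNFS SCSI layout (RFC 8154).
    PS_DESIGNATOR_T10 = 1
    PS_DESIGNATOR_EUI64 = 2
    PS_DESIGNATOR_NAA = 3
    PS_DESIGNATOR_NAME = 8
    # Hypothetical new value for an NVMe namespace NGUID; the actual
    # number would be assigned by the draft, not by this sketch.
    PS_DESIGNATOR_NGUID = 9

def validate(dtype, ident):
    """Check that an identifier blob has the length its type requires."""
    if dtype == DesignatorType.PS_DESIGNATOR_EUI64:
        return len(ident) == 8    # an EUI-64 is 8 bytes
    if dtype == DesignatorType.PS_DESIGNATOR_NGUID:
        return len(ident) == 16   # an NVMe NGUID is 16 bytes
    return True                   # other types are variable-length

print(validate(DesignatorType.PS_DESIGNATOR_NGUID, bytes(16)))  # True
```

The wider NGUID (128 bits versus EUI-64's 64) is what addresses the "large fabric, many hosts and servers" concern raised above.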
G: As I was saying before, we have a draft -00, but we wanted to have the discussion first, to see if we are going in the right direction. There is an updated draft, and the plan is to have it uploaded by June. Of course, if there are sections that people are interested in, we could share some sections; again, it's not complete and it's not perfect. That's why we prepared it this way, because we want a faster path to working group acceptance.
G: If you are open to not criticizing us too much, just bring your opinions and we can share it, but again, we don't want to confuse you; that's the biggest problem we have. I think this is very important for many people; that's why the working group asked us to take over this draft, and I want to respect that. Now, the SCSI layout uses persistent reservations to provide access control, right?
G: Both the NFS server and the pNFS clients have to register a reservation key with the storage device, and then a reservation has to be created on the storage device. We discuss this a bit more in the draft; I'm just sharing what we're looking at now. With NVMe, for individual subsystems, each subsystem must have a unique working key, because, as I said, you can have multiple types of fabrics accessing the same storage devices, and that could make things complicated.
G: When a persistent reservation is done, because otherwise there will be conflicts when one client modifies an area, the NFS server must generate a key for itself, and that's something that needs to be in the protocol, plus a key for each of the pNFS clients that access the exported volumes, before exporting a volume. That's important because, without this, access to remote persistent storage is complicated. We cannot, of course, prevent all wrongdoing, but it is better to have a reservation key controlled by the NFS server.
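The key-generation scheme described above, one key minted by the server for itself and one per pNFS client before a volume is exported, can be sketched as follows. Persistent reservation keys are 64-bit values; the table class and the names are invented for illustration, not taken from the draft:

```python
import secrets

class ReservationKeyTable:
    """Mints distinct nonzero 64-bit reservation keys, one per registrant."""

    def __init__(self):
        self.keys = {}

    def key_for(self, registrant):
        # One stable key per registrant (the NFS server itself, or a
        # pNFS client), generated once and reused on later lookups.
        if registrant not in self.keys:
            k = secrets.randbits(64)
            while k == 0 or k in self.keys.values():
                k = secrets.randbits(64)
            self.keys[registrant] = k
        return self.keys[registrant]

table = ReservationKeyTable()
server_key = table.key_for("mds")        # the NFS server's own key
client_key = table.key_for("client-1")   # a pNFS client's key
print(server_key != client_key)           # True
```

Keeping the table on the server side matches the point in the transcript that the server, not each client, should control the reservation keys.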
G: That was my thinking: the reservation key applies to all access by an individual pNFS client, regardless of the fabric and regardless of the volume. That's why it's important to have a staged mechanism, because otherwise you end up with problems; the fact that each of the clients can access the storage over any of the fabrics makes fencing more difficult.
G: For fencing specifically, we have some additional complications. The NVMe reservation is similar to the SCSI persistent reservation, which from that perspective is good, but the registration and removal behavior differs, so we will have to map the protocol's fencing onto NVMe reservations. Removing a pNFS client's registration will be more complicated; the reservation uses exclusive access, all registrants. So it's a little bit different, a bit more complex and more comprehensive.
G: First of all, this is typical; it's not something new, but it's clear in this case that the client has to return the layout, because the data could otherwise still be accessed by the remote side. That's why it's important to commit, to kind of freeze the current situation, when you disconnect. A future GETDEVICEINFO, of course, may require a new pNFS client registration, which is imaginable.
I: Keeping an open mind here, okay, but at the moment the best guess from the initial analysis is that it does not fly. What an RDMA flush is used to do is this: if you were doing an NFS write over RDMA, you issue an RDMA flush behind it to make sure the written data is stable before the NFS write completes. It turns out that for NVMe over Fabrics, the RDMA control path runs the other way, and so when an NFS write is done, what happens is...
I: ...the storage destination for that write turns around and issues an RDMA read to go get the data, and as a result we don't need the flush command in that case. In the reverse direction, there's no requirement for stable data on the client; there's no point in sending a flush from the storage to the client, because the read data is just put there and the client makes sure it won't lose it. So we don't need this one either.
G: Thank you for that. I'm probably out of time, so: we have a working group milestone for NVMe access in the pNFS layout. The current date was supposed to be August, but it took a little bit more work. First of all, there were some changes in the NVMe over Fabrics versions since we started, and we wanted to be sure that we reflect those. On the other hand, we have been thinking about TCP, because that's the preferred way people are talking about in the industry.
G: So that's the other thing: we wrote something at the beginning and then we stopped, and maybe we should re-emphasize the TCP part. The draft will be submitted sometime after the meeting; as I said, it's raw, it's not yet ready. But if the group is interested, we will share it, and of course we will accept any ideas. Maybe the first step will be to discuss it, to prevent any confusion.
G: And also, we reused a lot of the old draft; I mean, we merged the drafts, but we had to edit it way too much. And you remember that Christoph is no longer interested, so we took it on ourselves; the collaboration was good, I'm very happy. So it's all going to -00, and in fact, maybe it doesn't make sense to call it -03, because -01 and -02 expired a long time ago, so I don't think that would be useful.
L: Yeah, I agree. I'm glad to see that it's moving forward. There's relevance to the presentation I'm going to make later; obviously, I have a slide on this that I want to show.
L: Great, no, it's fine. I think it is appropriate to retitle the document. I mean, if you were building on that document and intending to take it to its logical conclusion, then maybe keeping the name for now would make sense, but it sounds like you're adding some significant content to broaden the discussion, so I think it's perfectly appropriate. We have the possibility of updating it; let's just do the right thing, I think.
G: You understand the protocol was only just finalized, in June or July, right? So I do have plans, by the way, for looking at this, but plans in the sense of writing a few things, not yet at the level of the protocol. I think it will need some more work. I prefer to do something when we decide which path to go and, assuming the group is confident, that's important, maybe when it becomes a working group item, then I will probably provide some implementation.
E: The first thing that I remember when I started to participate in this working group is Tom Haynes using GitHub to publish the editor's copy of his drafts when he was working on them, and I kind of adopted Git, and then later GitHub, for my own work. Then, a couple of years ago, I was talking with Lars about this, and he said, oh yeah...
E: I attended the QUIC working group meeting sometime in 2019, probably IETF 105, and I saw exactly what they were doing with this, and it was a whole lot more than just managing commits to Internet-Drafts. It looks like this is something that we would want to adopt to manage working group documents, to help us track consensus items and decisions and problems with the documents and comments. So, with that in mind, I will pass the baton to Lars and let him get on with it.
J: Okay, so I should first mention that there's not really any sort of globally sanctioned way in which working groups can use GitHub. Every group is sort of doing its own little thing, and QUIC is maybe more aggressive and further out there than others. There was a GitHub working group, I don't know if it has just concluded or if it's still sort of in the last stretch, that is trying to write up some text about what you could do there. So there's an RFC, and I can try and find it and forward it.
J: There will be an RFC; it's a draft at the moment. But yeah, basically we pretty much copied a lot of what we're doing from the HTTP working group, which isn't surprising, since that's the main workload that we're currently carrying. So we have created an organization on GitHub to start off with, which is called, you know, the QUIC working group, and it has a web page that is also hosted on GitHub, which is a little bit more friendly for newcomers to consume compared to the Datatracker page.
J: There's not a lot on there at the moment other than the documents and how people can contribute and so on, but it looks a bit nicer compared to the Datatracker, a complicated thing with 10,000 buttons. And under this QUIC organization we have a variety of repositories. We have one called working group materials that we're using for keeping all of the material for the various meetings: for each IETF we make a folder, for each interim meeting we make a folder, and we store the PDFs of the presentations there.
J: We don't put anything in there that isn't working group adopted; that was a decision. Some other working groups allow individual drafts to be moved under the working group organization early, but we really say: once we have consensus for adoption, that's when something moves. And since GitHub now lets you move repositories between organizations, or from an individual account to an organization, you don't necessarily lose the history; it actually moves the existing repository that somebody, like, you know, Chuck, maybe already has, under a hypothetical nfsv4 working group GitHub organization.
J: We have a little bit of a wrinkle in that we have a bunch of drafts that are very heavily related, we're calling them the base drafts, and there's a few of those, and they live in one repository, because when we started, GitHub didn't let you move issues between repositories, and for the base drafts we often found ourselves wanting to move an issue that somebody had opened against one document to another document, because the text moved, or something like that.
J: These days you would probably do it differently, but we did it that way. There's a README, and this is basically Martin Thomson's I-D template that was mentioned earlier, which lets you write documents in Markdown, with a sort of continuous integration setup, and automatically submit the XML to the IETF.
J: Martin's template generates this nice front page that has clickable links that generate diffs of what's called the editor's copy, which is a snapshot of the current GitHub version, against the last IETF draft, and so on. We're using it quite heavily. So that's basically how we're organizing our materials; it's nothing sort of earth-shattering.
J: We have some teams for people. We have the chairs, which is the three of us who have full access to everything. We have a team for each of the editors of the different drafts: for the base drafts there's a group called base editors, who have write access to that repository, and for the other drafts we have datagram editors and so on. And at the bottom you can see a team of Chinese translators.
J: These are a bunch of Chinese contributors who are translating our drafts into Chinese, and they have their own repository with their work, so they have a team. This is mostly so that we can do access control, so that only editors actually have write access. We also have a team called contributors, which is pretty big, at 30 members; basically everybody who wants to be there gets added to that.
J: That's been growing over time, and we just add people there as needed. Also, if you want to have somebody else review something, you can do a code review and so on, which we do on the spec, and you can assign reviewers in there. So that's what we have the teams for. But something that we're using very heavily is issues, and this is quite a busy repository. This is the base drafts' main specs: you can see we've had five thousand eight hundred-something commits since we started, with 70 contributors.
J: We've had 139 releases, and there's quite a bit of activity at the moment, with 25 branches. We have a bunch of open issues, and in total 3,600 issues that were worked on while specifying QUIC, and we have a bunch of labels that we found useful. Again, it's up to each working group to define which labels they find useful.
J: We have labels for which draft in the base drafts repository the issue relates to, you know, the HTTP draft, the invariants draft, recovery, and so on, and then we have labels with different colors that we use. One is: is this an editorial issue against the draft, meaning a wording change or something that doesn't touch standards-level text?
J: Conversely, if something is labeled design, it actually changes the current consensus on the operation of the protocol; somebody wants to add a MUST, say, or add a new protocol mechanism. Those issues get labeled design, and those actually get a consensus call in the working group and progress through a pipeline. And these other labels down here, has-consensus, invalid, needs-discussion and so on, are some that we use for keeping track of where stuff is in this consensus pipeline.
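The triage rule just described, design-labeled issues get a working group consensus call while editorial ones do not, can be sketched as a small classifier. The label names follow the talk; the function and return strings are invented for illustration:

```python
def triage(issue_labels):
    """Route an issue into a QUIC-style pipeline column by its labels.

    'design' issues need a working-group consensus call, 'editorial'
    ones do not, and unlabeled issues stay in triage for the chairs.
    """
    if "design" in issue_labels:
        return "design issues"       # blue label, needs consensus call
    if "editorial" in issue_labels:
        return "editorial issues"    # green label, no consensus call
    return "triage"                  # chairs are still deciding

print(triage({"design", "transport"}))   # design issues
print(triage({"editorial"}))             # editorial issues
print(triage(set()))                     # triage
```

In the real workflow this routing is done by project-board automation reacting to the label, rather than by code the working group writes.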
J: On pull requests versus issues, we have a pretty distinct rule: pull requests are basically diffs against the spec, but we don't allow discussions on pull requests; we want the discussions to happen on the issue that's associated with the pull request. This is, again, just by convention, so that you don't have to always remember whether something was discussed in the issue or the pull request. And something we use pretty heavily is this project board, and you can have many project boards.
J: We basically put a new issue in triage, where it lives in this column here, and the chairs monitor the discussion on the issue in order to decide whether it's editorial or design. When something is deemed editorial or design, we label it, and then the automation puts it into either the editorial issues column, where everything is green with an editorial label, or the design issues column, where everything is blue and has a design label.
J: So you need to run through this a couple of times before it makes sense in your head, but it's been very helpful for us to have this because, as I said, we have thirty-six hundred issues. We could not have done this by email, or in any other way that we could easily think of. And since everybody is sort of familiar with GitHub these days, it also lets other people work on this pretty easily and get up to speed.
J: There are a few other little things that we've done. We obviously have the QUIC working group mailing list, but we've also created, well, we had some pretty strong pushback, with people saying: I don't want to join GitHub to work on this spec, I want to work by email. And that's fine. I would still encourage people to get familiar with GitHub, it might be helpful to have on your CV, but some people absolutely want to use email.
J: So we basically created another mailing list, called quic-issues, and that list is subscribed to all the repositories of the QUIC working group organization. Every time somebody comments on an issue, or does anything, an email gets automatically generated by GitHub and goes to that mailing list, and so you get hundreds, maybe sometimes a thousand, emails a day to that list.
J: But if people want to subscribe to that torrent of information, they can then reply by email, and it ends up back at GitHub, which extracts the content and puts it into the ticket, and everybody else just sees it on the web. So it's possible to work on this, or follow the QUIC working group, even if you don't want to commit to GitHub, at least.
J: So some of these people are not actually implementing QUIC, none of the core contributors, but we have a bunch of people who are, sort of, our congestion control people, for example, who are just interested in making sure that the QUIC congestion control works, and they seem to like email better. So, again, that's sort of all I can really remember in terms of what we're doing; there might be some more things that I'm forgetting now, but those are the highlights. It's been very helpful.
J: Everybody is, like, in their 20s or maybe early 30s, and so for them this is oftentimes the first time they do an IETF working group, so doing it this way was not any different for them than anything else. And it really let us sort of keep track of stuff and move pretty quickly, which was nice. All right.
J: I mean, there's no difference there, all right? If something is adopted, you can work on it in that form, but even private drafts get worked on in GitHub, so I don't think there's any legal issue that's really a problem here. We have, I don't know where we have it, it's certainly...
D: ...somewhere in text. I think the main point here is really that anything you submit into the Git repository is considered an IETF contribution. It's on the participants, in the same way as when you submit a draft: if you submit a pull request for a text change, you are promising that all the copyright rules and all the other things that apply to draft text also apply to this pull request, for example. Or the famous one: if you send an email to the mailing list, it's a contribution to the IETF. It applies in the same way, yeah.
J: One thing I should also mention, which isn't GitHub but goes hand in hand with it, is that we have a very active Slack. It started out as a Slack channel for the different teams and the different companies and organizations that are implementing QUIC, so it's mostly focused on interop, but it turns out you can't really separate interoperation from specification, as you probably know better than we did when we started this. In the beginning, the Slack channel was not under the IETF Note Well; that quickly became impractical.
J: People would just start discussing: oh, we need to change this in the draft, because the interop is failing here. And so we decided that the Slack channel is also under the IETF Note Well. At the moment we have like 233 people subscribed to that Slack channel, so it's pretty big, and since we're using the free instance, where you only get 10,000 retained messages, and 10,000 messages is basically six weeks, it's very, very active, and there are 20 or 30 different channels, and most of the discussion...
J: ...if it's not on GitHub, it's in the Slack, and the mailing list for QUIC is surprisingly quiet, given the amount of work that's happening. And it's sort of open: everybody whom the chairs invite can join, and if you are in there you can also invite others, so it's not locked down to chair approval. That's been a very good resource as well, because it makes collaboration much easier, specifically for these 20-to-30-year-old engineers who don't really understand email; Slack, that they understand.
J: So if you guys are thinking about using GitHub for NFS, I would maybe start slow: move the documents there, and then maybe, for a new document that you're starting from scratch, try to work with GitHub issues instead of email threads. Try it out, maybe with one document or so, and see how it works for you, and then, if it works well, migrate more of your work over. Or just a small document first, maybe.
E: I've been doing that with my personal documents. You've shown us an enormous amount of process here, which we probably won't need because we're a much smaller working group, but it seems like managing our working group documents in a common Git repository might be useful. I know that we're looking at tackling a very large effort with renovating RFC 5661, which is almost seven hundred pages, so this might be helpful there.
J: Markdown has the one advantage that it's a bit nicer when you're looking at it on the web, because GitHub has some magic that can make Markdown diffs look okay. I don't know how well that works for XML, and it's sort of easier to copy and paste with Markdown compared to XML-formatted text. But you certainly can do it with XML.
J: ...some people have done exactly that. There's some automation available, I don't know if it's fully automated, but what they have done is basically: you submit a -00 of your document that is just the XML-to-Markdown conversion, so there are no other changes, and then you take that as the baseline and move forward. But again, I'm not touching that; I'm not qualified to write NFS text.
E: Okay, this is directory performance and scalability. I'll try to be quick and, in fact, I don't think there's much here to present other than just a problem statement. I'm sure all of us, as NFS practitioners, have received anecdotes from users and administrators over the years about the problems of managing applications that want to use large directories. A flat namespace is quite problematic; it's problematic for POSIX file systems in general, but especially so for NFS, for a variety of reasons.
E: We also have this problem where the server can vary the set of entries in a directory based on the permissions that the requesting user has. I don't know if that's present in today's implementations, but I'm told that's going to be an issue eventually. I guess SMB does this, and I'm sure... I'm not an expert there. Yes.
L
E
It creates directories and files one at a time, and it's completely synchronous with the server's backend storage. That's increasingly a problem for these data-center-area file systems, where you've got multiple copies of every block behind an NFS server target, and there's a replication protocol going on back there that makes these things very slow.
E
They are not very fast with individual creation operations, but if you send them a bunch of operations in parallel, then they seem to do pretty well — they will scale. And NFS just sort of flies in the face of that whole paradigm. So I think we need to change that. We've been telling people, you know...
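The scaling behavior described here — serial creates bottlenecked on backend replication latency versus creates issued in parallel — can be sketched with a toy latency model. This is a hypothetical illustration, not code from any NFS implementation; the latency constant and function names are assumptions.

```python
# Toy model: each durable create costs one round of backend replication.
REPLICATION_LATENCY_MS = 5.0  # assumed per-create backend latency

def serial_create_time(n_files: int) -> float:
    """One CREATE at a time: total time grows linearly with file count."""
    return n_files * REPLICATION_LATENCY_MS

def parallel_create_time(n_files: int, queue_depth: int) -> float:
    """CREATEs issued queue_depth at a time overlap their backend latency."""
    batches = -(-n_files // queue_depth)  # ceiling division
    return batches * REPLICATION_LATENCY_MS

# 1000 files serially vs. with 100 in flight at once:
assert serial_create_time(1000) == 5000.0
assert parallel_create_time(1000, 100) == 50.0
```

The point of the sketch is only that the same backend does fine when requests overlap, which a strictly synchronous one-at-a-time client can never exploit.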
E
E
E
E
A
A
E
Delegation here — so let's take all the multi-client cases out of this consideration and think about only the single client. The single client, when it creates a file in a large directory, is going to create that file and then invalidate its directory cache for that directory. But it doesn't have to. Why not? I mean, the server is completely in control of the cookies.
A
Well, there is a provision in directory notifications where you're notified of changes, and the assumption is that that's probably enough — but it might not be. Maybe what makes it not a valid assumption is that the server is allowed to change every cookie on every operation; perhaps that's a rule we ought to revisit.
E
The fifth bullet down is sort of in that area, but it takes a different approach. It would be where the client asks the server for a range of cookies that it can use, and then it is free to do creation operations without contacting the server again, until it's ready to find out what the creation mtime and ctime is of each individual file.
E
Although that's not in directory entries — I guess we could think of this as a reverse server offload, where we are asking the server for a set of resources that the client can use at its own whim. Then, when the client is done, it can write them all back, hand back the unused ones, and tell the server: I actually used these for these files. And it could be entirely asynchronous with applications.
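The "reverse offload" idea above can be sketched as a client-side cookie allocator: the server grants a range up front, the client assigns cookies to new entries locally, and the leftover range is returned when the client is done. All of the names here are hypothetical, assumed for illustration only.

```python
class CookieRange:
    """A server-granted range of directory cookies, consumed client-side."""

    def __init__(self, start: int, count: int):
        self.next = start
        self.end = start + count

    def allocate(self) -> int:
        """Assign the next cookie locally, with no server round trip."""
        if self.next >= self.end:
            raise RuntimeError("range exhausted; ask the server for more")
        cookie, self.next = self.next, self.next + 1
        return cookie

    def unused(self) -> tuple[int, int]:
        """The leftover range the client hands back when it is done."""
        return (self.next, self.end)

r = CookieRange(start=1000, count=8)
created = {name: r.allocate() for name in ("a", "b", "c")}
assert created == {"a": 1000, "b": 1001, "c": 1002}
assert r.unused() == (1003, 1008)  # returned to the server at the end
```

The write-back at the end is where the server would learn which cookies map to which files, keeping the whole exchange asynchronous with the applications doing the creates.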
E
E
E
E
C
K
A
A
This is a proposal, or pre-proposal, about a possible improvement in the parallelism of directory operations. There were small points of contact with what Chuck presented, but I'm not looking at any one thing in particular — just at general performance. I'm looking at situations where we have servers processing a lot of work in parallel, and maybe directory layouts are one way to do that.
A
A
So the idea is, you have a pNFS-like directory layout type. This is directed, in my case, at the clustered-implementation case — cases where we have a cluster server and you naturally put different directories on different servers. You get a lot more parallelism if you simply allow the client to get a layout that says: hey, if you want to do something to this directory, go here — without striping. That provides granular handling of the parallelism.
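A minimal sketch of the layout idea just described — each directory mapped whole to one server of a cluster, with no striping, so operations on different directories proceed in parallel on different servers. The server names and placement function are invented for illustration; a real layout would be granted by the metadata server, not computed client-side.

```python
SERVERS = ["server-a", "server-b", "server-c"]  # assumed cluster members

def layout_for(directory: str) -> str:
    """Return the server holding the (whole-directory) layout.

    Deterministic byte-sum placement stands in for whatever the
    metadata server would actually hand out in a LAYOUTGET reply.
    """
    return SERVERS[sum(directory.encode()) % len(SERVERS)]

# Placement is stable, and different directories spread across servers:
assert layout_for("/home") == layout_for("/home")
assert len({layout_for(d) for d in ("/a", "/b", "/c")}) > 1
```

The key property is the second assertion: work on distinct directories lands on distinct servers, which is exactly the inter-directory parallelism the proposal is after.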
A
It's inter-directory parallelism within the same file system, rather than intra-directory parallelism as in the striping case. Now, I say pNFS-like because there are some differences, coming in as well as coming out. The coming-out part is that the server responsible for the directory has given the function to another server via the layout. The server whose normal pNFS role is metadata server can still perform the function if requested, but typically clients will go to the one who holds the layout.
A
There's no striping — mainly because I couldn't see how to do striping, and I've seen what problems arose: previous attempts to do this with striping didn't work out too well. But you do have read and write layouts, and you have a single layout type — unlike the case of pNFS for data, we would have a single layout type rather than multiple ones. So it's worth explaining why. First of all, it's a relatively easy way of improving directory parallelism; it's quite easy.
A
A
I think maybe the best way to do that is, as Chuck said, some offload: treat this as an operation that you give to the server as a whole, and let it tell you when it's done — assuming the server does enough work in parallel to avoid the need to parallelize a single command. So that's not so different between my approach and Chuck's.
A
Now, the many attempts to do striping have not resulted in improvements, because the proposed directory striping had not worked out. I'll talk about that on the next slide. That's based on my recollection: I just saw lots of messages on the subject, tuned it out, and somehow all those proposals died. What I'm assuming is this:
A
Probably, if someone were interested in resurrecting that, we could revisit it, but I'm not. The other issue is that we have directory delegations and notifications in RFC 5661, but they're not implemented, and I'm not sure why. I'm not just asking whether someone knows and can speak up; I'm also asking for discussion of this on the list. Do we have this unused feature because it was wrongly specified, wrongly thought out, not the right thing? If so, could we improve it?
A
A
Now, we have had people, as I said, work on striping of directories, but the problem is it's hard to stripe because there's no obvious correlate of the file offset. There are hashes you might use, but it's hard to get agreement among the client, the server, and the on-disk file system. I saw this discussed endlessly and just figured maybe it's not worth it. That kind of thing is necessary for the performance of extremely large directories — but how common are they?
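The agreement problem just mentioned can be made concrete: since a directory entry has no natural offset, striping must come from a hash, and the client, the server, and the on-disk file system must all compute the same one. The hash functions below are invented solely to show the mismatch.

```python
def stripe_of(name: str, stripe_count: int, hash_fn) -> int:
    """Which stripe an entry lands on, given an agreed hash function."""
    return hash_fn(name) % stripe_count

client_hash = lambda s: sum(s.encode())        # hypothetical client choice
server_hash = lambda s: sum(s.encode()) * 31   # a different on-disk choice

# Same function on both sides: placement agrees.
assert stripe_of("file1", 4, client_hash) == stripe_of("file1", 4, client_hash)
# Different functions: the client looks for the entry on the wrong stripe.
assert stripe_of("file1", 4, client_hash) != stripe_of("file1", 4, server_hash)
```

Getting three independent implementations to standardize on one hash (and keep it stable across versions) is the coordination cost that, per the discussion, never seemed worth it.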
A
They're more common than I think, but it's also a question of how common they should be. Maybe that's just not a normal thing, and there's no real percentage in trying to make those perform well. But this is a good way to provide directory parallelism that's better than what we provide now.
A
A
Now, possible alternatives to this: well, directory delegations — Chuck has mentioned directory read delegations, and I think much of what he talked about makes sense. But my sense of how things work is that if we can't get people to invest the effort in directory read delegations, it's hard to believe that we'd get people to invest in directory write delegations; they're so much more complicated.
A
We need to understand whether the expectations for directory delegations can be made better, or whether there is a problem with directory delegations that we can address now. Chuck and I, in our discussion, talked about the idea that, gee, you've changed this one file and then potentially every cookie for every file might change. That might be a simple thing that we could address with an attribute bit, with the server saying: okay, yes, I'll give you notifications, and I'll agree that I'm not changing the cookies of existing files.
A
If that's worth doing, we should think about it. So, as Chuck tried to start a discussion process, I think maybe we should have a discussion focused on directory performance — not only on the cases he talked about, but on what we can do in general to improve performance. Is this an area of interest for everybody?
M
Right — this is Hannes. I don't necessarily agree that file delegations are not performant.
A
All right, okay. That's what people tell me. I think maybe it's not a question of them not being performant, but there were a lot of environments where they're not all that useful. I think that's what I'm hearing, but I don't speak of that from direct knowledge.
A
Basically — I think there's a lot of text here, but the basic thing is you just define, Martin, this layout type, and it works: you get a layout. Because of the way the rules work, you get your own format for the layout-get request and the layout return; that's provided for by the way we defined things — every layout type gets its own version of those previously opaque structures. I think you would not have any struggle with this.
M
This sounds like you're pushing the two issues together. You described a use case where you want to say, basically, this directory is located on this cluster server or that cluster server. That has nothing to do with the delegations that you're describing.
A
M
I just don't know what it means for them to be different. Okay, so you might say with the read layout you can go to cluster B, and it'll be slower to access if you want writes — but it just doesn't make sense to me. It sounds like you're trying to force it into the layout definition because part of it matches.
M
A
A
A
Yes, okay. So I'm looking to assess working group interest. I'm not hearing a lot of interest in this, but I am hearing interest in the general problem, and I think we have to figure out how we can discuss this on the list. I think Chuck has brought up some interesting issues; we should have a general discussion of that and see what the interest is, and I won't make the case further now.
K
A
L
G
A
G
N
H
H
G
At minimum we're talking about every extra byte, but already people are talking about orders of magnitude. So I think data reduction has become a factor: every storage system will have to do its best to reduce the data. The memory of servers increases, and currently we can use NVMe devices.
G
Also, there are new, faster fabric interconnects available as well. So that's where we stand today; that's why it's important to try to reduce the data as much as possible before pushing it to the fabric. To address this problem, there are new data reduction methodologies, algorithms, and compression enhancements to improve data reduction.
G
Variable rather than fixed blocks, because with variable blocks you get an opportunity to compress better depending on the compressibility of the data. There is also a lot of work on compression hardware — for example Zipline or other dedicated chips, or FPGAs; many vendors already use this kind of equipment. Data reduction requires larger memories and a larger number of cores, and of course, looking at the world right now, the number of cores has exploded and new servers come with a lot of memory — it's not uncommon to have terabytes of RAM.
G
G
G
The NFS server's data-reduction engine operates on the file system block — typically 8K, though it could be other block sizes. There is also analytics data regarding the compression of different types of files that could make the reduction engines in the array more efficient — meaning that even if the achievable compression ratio is unknown, the type of compression that works is known. But today there's no way to take advantage of this data, because the application's characteristics are not visible to the storage.
G
There is a new draft extending the pNFS SCSI layout to NVMe, presented earlier. In a previous presentation we proposed to use named attributes; here we expand the data-reduction proposal to apply to pNFS SCSI as well as plain NFSv4. If there are questions — I'm sorry, I'm sharing my screen and cannot see them, so I'll address them at the end. We need a way to communicate the data-reduction characteristics from client to server, and we propose a few optional attributes for the compression.
G
G
There are two use cases. The first use case is a normal NFS server which talks to a reduction engine — generally inside the storage — which writes to the block storage. The assumption for this use case is that the NFS server can communicate the attributes as metadata directly to the reduction engine: the client transfers the attributes to the server, and the server uses an internal protocol to connect and transfer this information to the block array, first identifying which blocks are associated with the file.
G
So if there is information available about how compressible a certain file is, that can also be associated with each of the blocks belonging to the same file, in this exchange between the NFS server and the engine. The other case is pNFS SCSI over NVMe. In this case, the server can talk to the engine, but it is also possible to use the NVMe and SCSI channels to transmit that information directly from the client to the reduction engine. The data-reduction named attributes are accessed via the OPEN operation.
G
G
These attributes are intended for data needed by applications rather than by an NFS client implementation, so it's communication from the application directly to the storage. Implementers are strongly encouraged to define the new data-reduction attributes as RECOMMENDED attributes, not mandatory, and a client should then check whether the attribute exists or not. Hidden metadata would be retrieved by the NFS server and passed down to the engine; that's outside the scope of what we are looking at.
G
We propose enhancements to the NFS protocol operations to allow RECOMMENDED attributes to be queried by clients: a new attribute bitmap for data reduction. RECOMMENDED attributes may be examined and changed by normal GETATTR and SETATTR operations. There was a conclusion from last time — I think it was positive — and I'm open to the comment that they can be modified by users.
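The attribute mechanics proposed above — optional per-file data-reduction hints in a bitmap, examined and changed with GETATTR/SETATTR-style operations — can be sketched as follows. The bit names and class are hypothetical illustrations, not identifiers from the draft.

```python
# Hypothetical bitmap positions for data-reduction hints.
ATTR_DR_COMPRESSIBLE = 1 << 0
ATTR_DR_ALGORITHM    = 1 << 1

class FileAttrs:
    """Toy per-file attribute store modeling RECOMMENDED (optional) attrs."""

    def __init__(self, supported: int):
        self.supported = supported  # what this server implements
        self.values = {}

    def setattr(self, bit: int, value):
        # RECOMMENDED means a server may simply not support the attribute.
        if not (self.supported & bit):
            raise NotImplementedError("attribute not supported by this server")
        self.values[bit] = value

    def getattr(self, bit: int):
        return self.values.get(bit)  # None if never set

f = FileAttrs(supported=ATTR_DR_COMPRESSIBLE | ATTR_DR_ALGORITHM)
f.setattr(ATTR_DR_COMPRESSIBLE, True)
assert f.getattr(ATTR_DR_COMPRESSIBLE) is True
assert f.getattr(ATTR_DR_ALGORITHM) is None  # optional: may be absent
```

A client probing for support first (the "check if it exists" step from the presentation) would inspect the supported bitmap before issuing the SETATTR-equivalent.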
G
Modified by users, and stored with the file system object — both files and directories. Again, what I will ask this time: I would like the opinion of the group about adding such data-reduction attributes to the NFSv4 protocol — as named attributes, or any other way that would be the current convention. Should this become a working group item? Should we first define the protocol changes proposed here before adoption?
G
G
G
G
A
G
G
G
A
G
L
I think it's useful to raise awareness, and you know, we're certainly watching your presentation, but there's only a couple of handfuls of people here. The working group at large, you know, needs to be engaged. We need something more concrete. Your ideas are interesting, but it's not up to the level of saying, yeah, we should proceed.
E
Why are you asking about named attributes? I'm not sure that the takeaway from the last presentation was that we rejected extended attributes. I think we said: use regular fattr4-style attributes; don't use extended attributes for this. So I'm not sure why you're coming back and asking whether you should use named attributes — I think we told you what would be acceptable for this.
G
E
E
L
Right, let's get started. I have seven slides here; I'm going to try to move through them quickly — you've perhaps seen some of them. I forwarded these slides to the mailing list about half an hour or 45 minutes ago, so you can see them there. A brief introduction: the flush extension is a proposed extension to the RDMAP and DDP protocols, collectively known as iWARP. It supports three new operations: flush, which is a placement guarantee for remote visibility and remote persistence; a second that is a little stronger; and a verify — an acknowledgement of the state of data that you have previously written.
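The flush and verify semantics just introduced can be modeled in a few lines. This is a toy model of the behavior, not the wire protocol: writes land in volatile remote memory, flush is the placement guarantee that makes them persistent, and verify acknowledges the durable state of previously written data. All names are invented for illustration.

```python
class RemoteMemory:
    """Toy model of remotely writable memory with flush/verify semantics."""

    def __init__(self):
        self.volatile = {}  # offset -> byte: written, not yet persistent
        self.durable = {}   # offset -> byte: guaranteed placed/persistent

    def write(self, offset: int, data: bytes):
        for i, b in enumerate(data):
            self.volatile[offset + i] = b

    def flush(self):
        """Placement guarantee: everything written so far becomes persistent."""
        self.durable.update(self.volatile)

    def verify(self, offset: int, data: bytes) -> bool:
        """Acknowledge whether previously written data is durably placed."""
        return all(self.durable.get(offset + i) == b
                   for i, b in enumerate(data))

m = RemoteMemory()
m.write(0, b"log")
assert not m.verify(0, b"log")  # written, but not yet flushed
m.flush()
assert m.verify(0, b"log")      # durable after the flush
```

The ordering this models — write, then flush, then verify — is the pattern a remote-persistent-memory client would use to know its data survived.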
L
These three operations are present in our draft in their original form. Second, there's a similar effort underway in the InfiniBand Trade Association — the organization that owns the InfiniBand and RoCE specifications — where it's being published as an annex. The effort in the IBTA and the iWARP proposal that we published in the draft have compatible semantics. They're slightly different in that the IBTA extension does not currently include verify, but it's been discussed there; it was actually skipped in the interest of time.
L
L
F
L
At the time, the draft laid out the fundamental requirements and concepts — it made it sort of a draft of a protocol, but it didn't jump to that conclusion. The idea then was to lay out the requirements and concepts only, and to begin the draft of a protocol but not actually specify it. There was a lot of consensus on it; it got a lot of traction. I presented it at the Storage Developer Conference, at IETF, and at RDMA conferences.
L
It's been the approach of choice for everybody, and there's been significant work offline, including that IBTA effort, leading up to this. I thought it was finally time to update the document, in March 2020, last month — four years later — with authorship from multiple companies this time, the major iWARP vendors.
L
We updated the requirements and concepts — basically refreshing them — and added a specific protocol proposal, still to be written, plus tidbits in the existing document: the ordering rules, a local interface, and local processing sort of round out the discussion. To be discussed today — not decided — is whether it's appropriate to adopt this as an nfsv4 working group item. I'll tell you why: the RDDP working group, which formerly owned iWARP, is no longer active; it was closed down several years ago.
L
L
Unfortunately, the alternative there is a large catch-all group without a lot of specific RDMA focus or specific RDMA expertise; it tends to have quite a few areas represented. The second candidate was an independent proposal: I would just continue to publish it as an independent draft and it would enter the independent RFC stream. That's certainly possible, but it's undesirable to update an existing IETF spec like the iWARP suite
L
in this way — it's kind of strange, as a matter of fact. The third option, proposed in the meantime, was: let's just do it in nfsv4. It's a relevant area — it's transport, it's storage related, etc. — and there's a lot of RDMA expertise here. So it made a lot of sense to me and others. I'm going to make a tentative recommendation that nfsv4 adopt this.
J
L
There is significant nfsv4 working group activity in this area. RPC over RDMA is the obvious one — it's an existing effort, currently being extended as RPC/RDMA version 2. There's no actual interdependence between these two possible areas, but there's a strong relation between both: moving argument data or storage data over RDMA.
L
Second, the pNFS NVMe-over-Fabrics use case, possibly including PCI Express peer-to-peer transfers — David Black gave that presentation earlier in this meeting; I was glad to see it. And third — which I've just discovered — the push-mode layout that Christoph published back in 2017 is actually closely related to this new NVMe-over-Fabrics work.
L
You know, these are all related activities that have been in the nfsv4 working group over time. So there's already RDMA expertise, and there's already development activity; there's strong storage relevance to a persistent-memory-equipped server — many NFS servers are adopting persistent memory in their storage panoply.
J
L
What I meant is, it's strongly related to remote shared memory; that's not actually storage related, but it's closely related in terms of the fabric requirements. So there's current nfsv4 working group activity relevant to this document. I cc'd the nfsv4 working group after publishing it last night, and there's been some informal discussion about this, with, I must say, broad support.
L
L
Your chairs, the Area Director, and other IETF processes all need to be in agreement on this, and we would update the draft — instead of being an individual draft, it would obviously become a draft-nfsv4 or draft-ietf-nfsv4 document, retitled. I would propose something along the lines of the name "RDMA placement extensions," which I think most accurately describes it.
A
L
To date, everybody has been strongly in favor. There were a couple of messages that went to the mailing list already on this — one from David Black, for instance; I'm just trying to remember who else chimed in — but there were a couple of public statements made in support of this idea. I haven't heard anybody say it's a bad idea, so I would expect it would be fairly easy to determine whether we have consensus.
L
D
L
First question: will it bring in more people? I think the answer is yes. My co-authors in particular are all representatives of companies that have iWARP products, and there's one additional co-author, also an iWARP developer, who has agreed to jump in on the next revision, so I would expect them to join the discussion, certainly. I don't know who else might be lurking in the wings, but surely there are other RDMA-interested parties who previously may not have been participating in nfsv4 but would hopefully join the discussion.
G
B
L
L
D
N
G
E
A
N
N
N
D
Yeah, I mean, from my perspective it's fine for you to discuss it, but I think you need to keep in mind, if you take on new work — I actually think you need to think about how you prioritize, etc. You still have a couple of work items to write up, open and ongoing, and those aspects need to be thought about. I think, really, don't just gobble up more work, because that's not going to endear you to me — rather the opposite.
D
D
N
No — okay, I had taken minutes in the EtherPad. I'm going to produce a document and upload it to the datatracker site with the meeting minutes and such. If you take a look at it and there's anything in there I'm missing, or should add or correct, please do. I'm not really trusting the EtherPad stuff, but it started working again for me anyway.