From YouTube: IETF111-NFSV4-20210730-2130
Description
NFSV4 meeting session at IETF111
2021/07/30 2130
https://datatracker.ietf.org/meeting/111/proceedings/
B
Okay, yeah, but I don't... I don't know who you are, but you didn't... I'm not seeing my Note Well deck.
B
So, if you were here yesterday, welcome to the NFS version 4 working group, part two. This is IETF 111, in case you were unaware of where you were. We have gone through half the agenda. Sorry, Dave! I just want to get this out of the way, because I've got to show the Note Well. We have four remaining sections to finish today. Chuck Lever will be leading most of them: the RPC-over-RDMA docs, NFS version 4 enabling use of QUIC, and discussion on future work.
B
But what I didn't get to yesterday, we mentioned in passing: when you signed up for the IETF virtual meeting, you will have passed through, at registration time, the Note Well document, a reminder of IETF policies.
B
These cover patents and the code of conduct, and if you are unaware of them, please read the documents; they make for good reading. By participating in the IETF, you agree to follow its process policies. But the most important things: if you're aware that any IETF contribution is covered by patents or patent applications that are owned or controlled by you or your sponsor, you must disclose that fact or not participate in the discussion. And you are being recorded; written, audio, and photographic records of the meetings may be made available, along with the personal information that you provide.
B
I am taking notes with Etherpad. I sent out a link; it's on the meeting materials site or in the agenda. If you want to join in there and throw in notes, I think it allows multiple writers. Let me get out of here. Okay.
A
So I've requested to share my slides. My slides, Brian. I need...
A
That's where there's a lot of interest. The other two slide decks that I have are probably less interesting to people, so if I get through these now, then whoever's not really interested in the other stuff... Yeah, I haven't started the slideshow yet; you should see them now. Anyway, I'm going to start with QUIC first, so that folks who are not interested in some of the other items can leave as they need to.
A
So I'm just putting up the screens for people who don't know what QUIC is. I see Lars in the chat room. He can certainly correct me if I've gotten anything wrong, and I encourage him to speak up.
A
Building an RPC on a QUIC transport, and then NFS on QUIC.
A
We're not really clear about who wants RPC over QUIC now that we have TLS support for RPC. The main benefit that RPC would see with QUIC is transport layer security. There are some other advantages as well; they might not be available immediately, just because QUIC is not quite done yet. There are some extensions in the works that look kind of interesting, and that might be interesting to RPC.
A
We don't have stakeholders or interested parties who are grabbing our collars and rattling us and saying, you know: gotta have QUIC now. There is some interest from some areas of the Linux community that are very focused on SMB, because Microsoft is focusing on SMB over QUIC.
A
So if anybody else knows of a very strong usage scenario for QUIC, a particular user who might have a use for it, please let us know: post on the mailing list or mention it here in the chat room.
A
So I'm kind of a QUIC newbie, but there are some things that we've sort of been looking at, and salivating a little bit over, and I've listed them here on the slide. One of the things that QUIC would provide us is having more than one reliable stream per connection, and in that way we could probably separate the forward- and reverse-direction RPC transaction streams, so forward would be on one stream and reverse would be on another.
A
We can also do clever things like creating more than one stream per connection, because head-of-line blocking is per stream, so blocking on one stream would not block progress on another stream. Anyway, that's one of the technical details that we'll need to drill into on the mailing list, as we drill into QUIC a little deeper.
A
Another thing that QUIC might offer RPC is better loss recovery; congestion detection and control is more robust than it is for TCP. I'm not sure that's all that interesting for NFS, just because it typically runs on data center networks. Originally, of course, twenty years ago, when we created NFSv4, we had the lofty goal of making NFS a wide-area file system, and that's where this kind of advanced error and congestion detection and control might be advantageous, over wide-area networks.
A
Also, it's pretty well known that RPCSEC_GSS doesn't obscure the RPC headers; those are in the clear. Once we have TLS, or QUIC with its transport layer security, we would be able to obscure the entire RPC request; each of them would be completely private. And for TLS we still have to have a TLS probe in the clear, while with QUIC, TLS is built in, so the client wouldn't need to probe for TLS. It would just be there; it's not opportunistic.
A
And the fact that it's always on might be problematic for some folks, just because when they select QUIC, they would see a pretty immediate loss in performance compared to TCP without transport layer security.
A
So I think we have to set that expectation for people who want to see QUIC in our RPC, in our internal consumers: QUIC is going to have this performance hit pretty immediately.
A
The other issue is similar to the one we have with TLS, and that is that a lot of the mechanisms that we have are all in user space. For kernel consumers, like in-kernel NFS, we need to do something about implementing QUIC in the kernel.
A
I know Windows is way ahead of us on this, ahead of us meaning Linux; I'm speaking with my Linux developer hat on. I don't know if someone else knows about the status of an in-kernel implementation in Linux; I don't think there is one today. Someone's asking in the chat room about the loss of performance.
G
Nick Banks from Microsoft, main developer for MsQuic, the library that Windows uses. When you're citing loss of performance, I'm asking specifically what the target network conditions are here, because if you're going over the internet, in all the tests we have, the internet is the limiting factor, not the CPU cost of encryption. So that's why I was asking: when you're talking about loss of performance here, what is the scenario where you expect to lose performance? Yes, CPU is spent on encryption, but that's not necessarily the bottleneck.
A
Certainly in scenarios where you're talking about traditional gigabit Ethernet, we wouldn't expect the encryption cost to overcome the gigabit rate limit, but on faster networks, 10, 25, 40, 56, 100 gigabit...
A
We think the CPU cost is going to be the limiting factor. I'm also a little concerned about scalability on the server side, because the server is basically going to be spending a lot of time handling encryption and decryption for, you know, potentially thousands or tens of thousands of clients. So we think that's going to be an issue.
A
Yeah, I mean, I'm talking about the exceptional, you know, high-performance cases at this point. And also, I have to be honest, I don't have numbers; we're just looking at our experience with encrypted GSS. That's our basis for this prediction.
A
There are some QUIC features that we're not really expecting to benefit much from. One is 0-RTT. Most RPC connections are long-lived, especially the NFS ones. Basically, the clients connect and they use as much of the connection as they can.
A
And Nick has shared some performance data in the chat room, if people are interested in that. So there might be some benefit for situations where the client does some port mapping or other arcane things at the beginning of a connection, but after that, really, you know, having two or three handshakes instead of one is not much different, so I don't...
A
I don't think 0-RTT is going to be especially interesting, as it is for things like web clients, where they're making lots of connections all at the same time, over and over again. We also had a dream of being able to use datagrams in a reliable context, and QUIC doesn't have reliable datagrams; it has an unreliable datagram extension that's in the works. So we're still going to need some record-fragment framing, similar to what we do with TCP streams.
A
Of course, each stream on a QUIC connection would have its own framing.
A
I'm sorry, I was just reading some comments in the chat room. Such a document would spell out the framing; basically, you know, repeating the same description of the framing that we have for TCP: just a single 32-bit word in front of each RPC record.
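For reference, the TCP framing being described is the ONC RPC record marking from RFC 5531: a 32-bit big-endian word precedes each fragment, with the high bit flagging the last fragment of a record and the low 31 bits carrying the fragment length. A minimal sketch (not from the draft under discussion):

```python
import struct

LAST_FRAGMENT = 0x80000000  # high bit of the record-marking word


def frame_record(payload: bytes) -> bytes:
    """Prepend the 32-bit RPC record mark used on stream transports:
    high bit set = last fragment, low 31 bits = fragment length."""
    return struct.pack(">I", LAST_FRAGMENT | len(payload)) + payload


def unframe_record(data: bytes) -> tuple:
    """Split one framed fragment; returns (payload, is_last_fragment)."""
    (word,) = struct.unpack(">I", data[:4])
    length = word & 0x7FFFFFFF
    return data[4 : 4 + length], bool(word & LAST_FRAGMENT)
```

On a QUIC transport, each stream would carry its own sequence of records framed this way.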
A
We need to request appropriate netids. We have two for each of TCP, UDP, and RDMA, so we'd probably need something like "quic" and "quic6". But we need to discuss the guidelines around how RPC consumers like NFS might want to use multiple streams on one connection. And then I mentioned the TI-RPC transport nomenclature at the bottom; I don't think the IETF has anything binding to say about those, but...
A
We have to do something that describes connections that allow multiple streams per connection. Maybe we just punt on that and say we're not going to comment, but I think for user-space implementations that depend on the TI-RPC API, we're going to have to have some thought behind that.
A
Some QUIC transport-specific issues; let me remind myself what's on this slide. Oh yeah, I was having a conversation with one of the SMB-over-QUIC engineers at SDC a couple of years ago, and he was mentioning that, because QUIC runs over UDP, there's going to have to be some discussion of how an NFS server that listens on UDP ports will be able to steer traffic to QUIC and UDP at the same time.
A
I think that's just going to require some prototyping, and then some language in the transport document that we come up with. We can reuse some of the RPC-with-TLS specification, in particular the ALPN and certificate usage guidelines; those parts of that document would apply to QUIC v1 as well. Since QUIC v1 uses the TLS 1.3 handshake protocol, that is something that we can reuse.
A
And then, probably in a separate document, we would discuss some of the things that are particular to NFS and its relation to the RPC transports that it runs on. As I've mentioned several times already, because we have multiple streams allowed per connection, we'll have to address that somehow for NFS, in particular for NFSv4 sessions.
A
I think Lars mentioned that there can be two to the 62nd streams per connection. We could probably set up an individual stream for each slot in a session, for example. If we wanted to do that, we'd have to answer some questions about what BIND_CONN_TO_SESSION does now that there are multiple streams in a connection.
A
We also have this issue about NFSv4 requiring a server to drop the connection if it ever loses an RPC transaction; clients depend on that. Would we change that to something like the server terminating a stream when it drops an RPC? We will have to have some discussion around that, to figure out exactly how RPC transaction loss is signaled by servers on QUIC transports.
A
So I had a few questions for the working group as a whole. Do we have consensus to begin work on this?
A
Maybe we need to take that question to the list, or we could do a quick poll here to see if there's anybody who opposes beginning work on RPC over QUIC. I also had a question that's really just due diligence: whether we need a charter update for the working group to work on RPC over QUIC. I thought the question was pertinent.
A
Hearing no objections, I think we should probably go ahead with this work, and we'll need to assign milestones. Maybe we can do that on the list. Go ahead.
F
Yes, so I think that was a very good presentation, at least for me. I see there are a couple of options, and I also see, like, answers.
F
So I would like to understand why we are doing it and what quantifies the gain we get out of it: what are the problems that we already see that RPC over QUIC is going to help with? What is our goal here, actually? I mean, I saw your slides in the beginning, but I didn't get really concrete answers from those, and there is nothing quantifiable.
F
I cannot quantify the gain, that's what I'm saying. I just don't want us to work on this just because this is something cool to work on; we should also understand what the achievements are and what we're trying to achieve here. So I just want to clarify those things for the working group.
A
Now, that's absolutely fair. I think I've communicated my own fuzziness on these ideas already. I don't have good answers, and I think, you know, the initial slide I put up, the one that said "it's not clear why we'd be doing this," says it all. We don't know those answers yet, and I'm not clear on exactly how I...
F
Maybe I would like us to actually have more discussion of it on the mailing list, so we understand what we're trying to achieve and what gain we're trying to get. I mean, obviously, you know, I'm super happy to see people picking up QUIC and trying to do a lot of things with it. We had another discussion on media over QUIC; this is RPC-over-QUIC work this week. These are cool things to do, I think, but why, and what for?
A
Okay, so that's probably one vote for putting this off until we have a better use case for it, and, you know, I think that's a perfectly fair thing to ask for. So...
A
I've asked to share these slides again; that's different.
A
Okay, this is an update on RPC-over-RDMA version two: kind of a status report and a summary of the technical issues that it's facing right now.
A
I just have 11 slides here; let's see what my time is like. Okay, I've been working on an actual implementation; both client and server are implemented in very basic form. I haven't been able to implement the v2 credit accounting without adequate flow control.
A
There are some pieces of the version two specification that cannot be prototyped, so I'm kind of blocked on having a flow control implementation right now, and the reason for that is that the text in section 4.2.1.1 of the -04 revision is basically garbage.
A
As I tried to implement it, I found that it was just not going to be adequate for what we needed to do, and the reasons are on the slide. I don't really need to go into those, but they're here for completeness.
A
So what I want to do instead is implement a classic credit-based flow control mechanism in RPC-over-RDMA v2: go back to having a full 32-bit field that's the credit grant, or in classic terms the window size, and then we would need a second field that usually contains a sequence number. We can also use the number of messages that have been received by the peer since the connection was established, because that's an implicit sequence number. So the remote end needs to advertise that number to the sender, so it can...
A
...so the sender can gate its transmission. I did reach out to Jana twice; I was not able to get a response. Martin suggested talking with him, but without that I just went ahead and built something that I thought would be reasonable. My co-author Dave has looked at this and thinks it's reasonable to go ahead with. We don't have a location in the header message for the second field; that's the only thing stopping me from actually implementing it right at the moment.
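The credit-gating rule being described can be sketched as follows. This is an illustrative model only, not text from the draft: the peer advertises a credit grant (window size) plus a count of messages it has received so far (the implicit sequence number), and the sender transmits only while its send count stays within that window. All names here are hypothetical.

```python
class CreditGate:
    """Sender-side view of classic credit-based flow control: the peer
    advertises (credit_grant, messages_received), and the sender may
    transmit only while sent - peer_received < credit_grant."""

    def __init__(self):
        self.sent = 0            # messages this side has transmitted
        self.credit_grant = 1    # window advertised by the peer
        self.peer_received = 0   # peer's count of messages received

    def update_from_peer(self, credit_grant, messages_received):
        # Carried in the two header fields discussed above.
        self.credit_grant = credit_grant
        self.peer_received = messages_received

    def can_send(self):
        return self.sent - self.peer_received < self.credit_grant

    def record_send(self):
        assert self.can_send(), "would overrun the peer's receive window"
        self.sent += 1
```

The point of advertising the received-message count, rather than a per-message ack, is that the sender can recompute the open window from just these two fields.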
A
Go ahead. Go ahead.
F
So I can take an action point, if you think like that.
A
Yeah, what I'm looking for is some expert review. I'm hoping that the language now in the editor's copy on GitHub is clear enough; it's just a page or two, and will be easy for someone who understands the theory here to sit down, look at, and go, "Yeah, that's good," or "No, that's stupid, do it this other way."
A
There was some complaint on the list a few months ago, when I brought this up earlier, that we need to be careful not to be too specific about the algorithms that are being used by the sender and receiver. I hope I haven't crossed that boundary; I hope the language in the document describes the protocol and doesn't really lock any implementation into a particular algorithm.
A
As I was looking over the recently published QUIC RFCs, I noticed that they have IANA registries for error codes and transport properties, and I'm wondering if RPC-over-RDMA version two should do the same, or maybe include other aspects of the protocol, like the message header numbers. That's certainly something we can take to the list; we don't have to go into that here.
A
So now, what's up for the working group? We're still sort of waiting for prototypes. I think we have a milestone date to deliver this document by December of this calendar year. I'm not convinced we're going to make that, because there are so many items in this specification that are new since v1. The credit accounting protocol is one of them, but other things, like peer authentication and transport properties, and in particular message continuation, are things that I would like to have under our prototype belt before we move forward with a working group last call.
A
I'm also wondering whether we should weigh the amount of work that's needed on the prototype and the document itself against some of the other things that we're interested in working on right now, especially TLS and the revision of RFC 5661.
A
As far as I know, right now the Linux prototypes are the only ones. I would like to see at least a second prototype brought to testing events, so that we can have some confirmation of interoperability. And I'm wondering if maybe we should extend the milestone farther, maybe by a year, or understand what the priority of this work is, so that we can focus on some of the things that more people are interested in than just this one particular new version of the protocol.
A
Comments? Okay, well, I'll ask again on the mailing list. I just want to throw up this slide for 30 seconds; this is my plan for the prototyping in Linux.
A
Hopefully I can get this credit accounting protocol nailed down, and then the next most interesting thing I would like to look at is message continuation, because I think that's a winning, unique feature of v2 that v1 just doesn't have; it's a big win. And then after I get those under my belt, hopefully we can look at transport properties. Meanwhile, I still feel like peer authentication is underspecified in that document, and I think that needs some expert attention as well. But I think the credit accounting protocol is the priority right now, until we get a little farther with the prototyping.
A
Yeah, I don't know what's going on there. All right, well, let's just go with this one. So, as long-time hangers-on know, I've been working on various forms of data integrity around file systems in Linux for quite some time.
A
There's this thing called the Integrity Measurement Architecture. They're kind of interested in being able to extend that out from local file systems only, to support remote file systems as well, but we began encountering a number of issues with that: namely, we needed a standard way of expressing the metadata that non-Linux systems could recognize and deal with, and we also hit the problem that some of these systems would have a problem with a GPL-only format.
A
So the easy part is creating attestation metadata: you know, a software vendor would basically create a digest over, say, an executable or a library, then sign that digest, include the algorithms used to create the digest and the signature in that metadata, attach the metadata to the file content, and distribute the metadata with the file content to end users.
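The vendor-side flow just described (digest, sign, record the algorithms, attach as metadata) can be sketched roughly as below. This is purely illustrative: the function name and metadata fields are invented for the sketch, and an HMAC stands in for the real public-key signature a vendor would use.

```python
import hashlib
import hmac
import json


def make_attestation(content: bytes, signing_key: bytes) -> bytes:
    """Illustrative attestation metadata: digest the file content,
    'sign' the digest (HMAC stands in here for a real public-key
    signature such as RSA or Ed25519), and record the algorithms
    used so other systems can verify it."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(signing_key, digest.encode(), "sha256").hexdigest()
    return json.dumps({
        "digest_alg": "sha256",
        "digest": digest,
        "sig_alg": "hmac-sha256",  # stand-in for a vendor signature scheme
        "signature": signature,
    }).encode()
```

A verifier on the end-user system would recompute the digest over the file content and check the signature before trusting it.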
A
And then the hard part is on the end-user system, because applications want to read these executables in small pieces, especially libraries.
A
You know, an application isn't going to want to read or use the entire library; it's just going to want to use little bits of it. And with a linear digest like MD5 or SHA-1, for example, you've got to read the entire file to compute the digest and verify it, and the problem is that that sort of thing is not realistic.
A
It's an impedance mismatch, shall we say, with the way virtual memory is managed on end-user systems, especially on memory-exhausted systems.
A
Certain parts of the file can be purged from memory and that memory reclaimed for other purposes, in which case, you know, the next time the file is read from untrusted media, it's got to be read as a whole block in order to confirm that the digest hasn't changed, that the file measures the same way as it did before. To resolve that issue, people turn to a tree of digests, often known colloquially as Merkle trees.
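The tree of digests mentioned here can be sketched as follows. This is a generic Merkle construction over fixed-size blocks, not the specific encoding of any standard; real schemes (fs-verity, for instance) define their own salting, padding, and leaf/node encodings.

```python
import hashlib


def merkle_root(data: bytes, block_size: int = 4096) -> bytes:
    """Compute the root of a binary hash tree over fixed-size blocks.
    Verifying any one block then requires only that block plus a
    logarithmic number of sibling hashes, not the whole file."""
    # Leaf layer: hash each data block individually.
    level = [hashlib.sha256(data[i:i + block_size]).digest()
             for i in range(0, len(data), block_size)]
    if not level:
        level = [hashlib.sha256(b"").digest()]
    # Combine pairwise until a single root hash remains.
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd counts
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

The root is a fixed-size value (32 bytes for SHA-256), which is what makes the sign-the-root-only approach discussed next attractive: the interior nodes can be rebuilt from the data whenever they are needed.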
A
The only issue with Merkle trees is that they're not a fixed-size piece of metadata. A linear hash like a SHA-1 or SHA-256 digest is, you know, usually less than four kilobytes in size, much less in fact, and so it's easy to guarantee that a file system can store that little bit of metadata. But a hash tree can get quite big, into hundreds of megabytes.
A
If we want to make Merkle trees useful, we have to reduce the amount of data that is stored. So what I'm proposing is just signing and storing the root hash of the tree; when the tree is installed on an end system, you know, the software distributor would sign the root hash and attach that as the metadata, and then on the end system...
A
You basically reconstitute that tree, and you can either store it on local storage, if the local file system can store hundreds of megabytes of sidecar data, or you can cache the tree in memory and use it on demand any time you open the file. Either one is possible; whatever you have facilities for, you can choose between the two or a combination.
A
Second, we needed a standard format that's not encumbered, so I thought maybe an X.509 certificate might be good for that: we basically put the root hash in the certificate.
A
The certificate then is signed, so that signs the hash, and its representation format is DER.
A
So the combination of these two things would give us the kind of letter of approval that we might want to distribute with the software that is installed on end-user systems. So I have these questions. I'm not sure this is the right forum for an interactive discussion of these things, but I'd just like to have people think about this a little bit; maybe I'll take all of this to the list.
A
But the main question is: has this been done before? I'm really not interested in duplicating this, but rather providing something that is needed but hasn't been done before. Any initial comments on this?
A
If I may make a modest proposal: yeah, maybe we should plan an interim meeting where we can use something like WebEx.
B
We don't have a prepared presentation on the future work; there were just a couple of notes in the agenda. That was for Steve: as you noted, the pNFS NVMe and pNFS RDMA work, I don't have a status on where we left that. And then Dave had a note about ideas to address other needs which are part of our charter but not being given serious attention.
B
He's got a point, yeah. Hey.
B
I'm watching the chat; we're down to nine minutes. Why don't we take the future work first to the mailing list and decide how we want to proceed with it? I don't think we have the people in the room right now who could actually give us a sense of where we left the pNFS stuff. Chuck?
A
Well, we had Dave Black. I...
E
The last person with whom... well, a version of that draft has lapsed; copious spare time has been in limited supply.
E
I was able to get an XML file and put it into a tool that I can use, and with luck we won't get bitten by the changes to the XML template. A workaround is known, which is just: don't send notes. I'm hoping that that will be the case. The toolchain issue was overcome the last time I was able to work on this.
A
And plan a meeting with technology that works for...
B
I'm getting a thumbs-up in the chat. Let's call this meeting to a close; we have several action items to take some discussions to the mailing list, including the future work discussion.