Okay, thank you everybody for being here. The first topic today, the one I'm presenting on, is Zstandard in the browser — or zstd, as some of you would say. Zstandard is a compression algorithm, a fairly recent one developed by the Facebook team.
I'll preface this by saying I am not an expert in Zstandard. I've used it, I understand it a little bit, and we've experimented with it for some of our use cases — I'll go into that a little. There are probably other people on this call who are even more familiar with it, and we can rely on them if we have any more detailed questions.
So that's one thing. The other thing is that I realize compression schemes themselves are not necessarily in scope for the Web Performance Working Group's deliverables. However, there are a lot of people here who I think are passionate about performance, and this is a decent forum to make a pitch like this.
Also, we do have the Compression Streams API, which this working group has looked at in the past, and that API could potentially support Zstandard as one of its content encodings, in addition to the gzip it offers today.
So with that being said, let's jump into it a little bit. Again, Zstandard is published by Facebook; they have a home page for it, and it is also specified in RFC 8878.
It was released, I think, a couple of years ago — maybe three or four years ago — and it has been adopted around the web in various tooling: databases, storage services, et cetera. There's not much web server support for it yet. I believe there is an nginx module to compress content with Zstandard; I'm not sure whether it's in any other servers, and I don't believe any CDNs support it.
We are using it for at-rest log storage, where it has provided some significant benefits over gzipping logs: notably, it's faster and produces smaller files, which is, I think, a win-win for our use cases.
So we have started experimenting with it internally, we're looking into other potential uses for it, and we think the browser use case might be a good fit. To share the benefits of it, and where it fits in the world of other compression algorithms out there, the Zstandard repo publishes this research that they did.
I'm not entirely sure of the corpus they're using here, which documents they're using, but this is one benchmark, if you will, that they have published. The way you might read this chart is as follows — and I think you can see my mouse; I'm hoping it's not too big, but it's there.
This dark green line is zlib. On the x-axis you have speed, so you want to be further left to be faster, and on the y-axis you have compression ratio, so higher up means better compression. Ideally, you want to live in the upper-left quadrant. zlib's levels run from one through nine, where one is fast but gives fairly poor compression and nine is the highest compression you get.
If you look at zlib across those levels, you can see the curve it offers as far as speed versus compression: as you get higher compression ratios, the speed in megabytes per second obviously drops. I think zlib's default is around six — this is around the sixth level — and beyond there you don't get a much higher compression ratio, but you do lose speed.
You can compare their speeds: zlib lives here at, you know, 80 megabytes per second or so, but for an equivalent compression ratio with the others — so you're getting similar sizes — you often get faster speeds with Brotli and even faster speeds with Zstandard, and that holds as you move up in compression ratio, if you keep it similar across all three. This point is about the default level for zlib.
If you compare the compression ratio against Brotli and Zstandard, theirs is much higher — I don't know, in real percentage points, what that works out to, but say 10, 20, 30 percent is often what we're seeing in our experimentation.
So again, this is one corpus we're looking at, and there are a lot of different types of data you can compress on the web, but this gives you a high-level overall view of what the different compression algorithms may give you. The interesting thing in this graph, to me, is the difference in output between Brotli and Zstandard — that's between the salmon and blue lines here.
At the higher levels of compression — when you go to Brotli 11, for example, that's this dot up here, and likewise the higher Zstandard levels — they start performing fairly similarly; they sit around the same curve. But at the lower levels of compression — let's say Brotli 4 or 6, which I think are common for dynamic content, and some of the lower Zstandard levels — for an equivalent compression ratio you get better speed with Zstandard, or for an equivalent speed you get a better ratio.
A big use case today is encoding static content: if you pre-compress, say, your CSS and JavaScript files — stuff that's not going to change — at the highest Brotli level, Brotli 11, you can get really good byte savings over gzip. A lot of people are doing that; they're pre-compressing as part of their build process, or compressing offline at the CDN.
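To make that concrete, here's a minimal sketch of the kind of offline pre-compression step people run at build time, using Node's built-in zlib bindings. The file paths are placeholders, not anything from this talk.

```ts
// Sketch of offline pre-compression in a Node build step; paths are placeholders.
import { brotliCompressSync, gzipSync, constants } from 'node:zlib';
import { readFileSync, writeFileSync } from 'node:fs';

const source = readFileSync('dist/app.js');

// Brotli quality 11: slow, but acceptable for static assets compressed once at build time.
const br = brotliCompressSync(source, {
  params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
});
writeFileSync('dist/app.js.br', br);

// A gzip -9 copy kept alongside for clients that don't accept Brotli.
writeFileSync('dist/app.js.gz', gzipSync(source, { level: 9 }));

console.log(`original ${source.length} B, brotli ${br.length} B`);
```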
But these high levels of compression are very CPU intensive and are often not suitable for doing on the fly, that is, compressing on every request. Looking back at the chart, both Brotli and Zstandard have very similar compression characteristics at the very highest levels, where you're spending the most CPU, so I don't think those types of content are necessarily the best use of Zstandard on the web.
I can't quantify that for you right now — I think there's some interesting research that could be done here — but certainly less CPU, I think, is good. In many cases less CPU also translates into lower infrastructure costs, whether that's at the origin or at the edge, and benchmarking has also shown that Zstandard decompression is very fast on the client.
So that's one of the use cases I think would be interesting to explore, which is mostly for dynamic content on the web. The other thing would be in userland. We have this Compression Streams API.
It allows you to compress any blob in JavaScript today, just with gzip. It's usable for a lot of different things — I'm not sure exactly how much it is used — but I think the interesting thing Zstandard could provide, if it were part of the Compression Streams API, would be a much faster way of compressing for the same ratio in userland JavaScript, or an even better compression ratio than you get out of gzip. That's roughly this green zone here: the difference between Zstandard's compression levels and speeds versus zlib's.
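As a rough illustration of that userland path, here's a small helper around the Compression Streams API. Only 'gzip', 'deflate', and 'deflate-raw' are accepted formats today; a 'zstd' format is hypothetical and mentioned purely to illustrate the pitch.

```ts
// Sketch: compress a Blob in userland with the Compression Streams API.
// A 'zstd' format does not exist today; it is the hypothetical addition being pitched.
async function compressBlob(
  blob: Blob,
  format: CompressionFormat = 'gzip',
): Promise<Blob> {
  const compressed = blob.stream().pipeThrough(new CompressionStream(format));
  return new Response(compressed).blob();
}

// Usage: const smaller = await compressBlob(new Blob([hugeString]));
```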
You can use Compression Streams for a lot of different things. Obviously, you can compress, persist, and read data out of IndexedDB — your binary blobs. Maybe it's important for some aspect of an app to compress data before it stores it within the browser.
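A sketch of that store-compressed-data idea, reusing the compressBlob helper above; the database and store names here are made up for illustration.

```ts
// Sketch: persist a compressed blob in IndexedDB ('app-cache' / 'blobs' are made-up names).
function saveCompressed(key: string, blob: Blob): Promise<void> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('app-cache', 1);
    open.onupgradeneeded = () => open.result.createObjectStore('blobs');
    open.onerror = () => reject(open.error);
    open.onsuccess = () => {
      const tx = open.result.transaction('blobs', 'readwrite');
      tx.objectStore('blobs').put(blob, key);
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    };
  });
}

// Usage: await saveCompressed('session-blob', await compressBlob(rawBlob));
// Reading it back would go through DecompressionStream the same way.
```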
It would also be useful for optimizations around compressing the resource timing tree and things like that. We've looked into potentially using the Compression Streams API to pre-compress our beacon data — with gzip or something — in the browser before we send it out, and just have our server read the gzipped upload.
But part of the concern, part of the hesitation there, is that gzip is still costly from a CPU-time point of view, and that's where I think we would love to explore Zstandard as another way of pre-compressing our data before we upload it, especially if we can do it very fast.
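As a sketch of that beacon idea: compress the payload in the page, then upload it with a header telling the collector what it's getting. The endpoint URL is a placeholder, and a real implementation would need a fallback for browsers without Compression Streams.

```ts
// Sketch: pre-compress beacon data in the page before upload (endpoint is a placeholder).
async function sendCompressedBeacon(data: object): Promise<void> {
  const json = JSON.stringify(data);
  const body = await new Response(
    new Blob([json]).stream().pipeThrough(new CompressionStream('gzip')),
  ).arrayBuffer();

  // The collector has to expect a gzipped body; 'zstd' would be the faster
  // option if it ever lands in the Compression Streams API.
  await fetch('https://collector.example.com/beacon', {
    method: 'POST',
    headers: { 'Content-Encoding': 'gzip', 'Content-Type': 'application/json' },
    body,
    keepalive: true, // let the request outlive the page, like sendBeacon
  });
}
```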
The third thing — another big thing you may persist and send somewhere — is JavaScript self-profiling data. If you just look at the raw JSON-stringified content of a profile, it's a ginormous string, and it could benefit from being pre-compressed, compressed in userland, with something like Zstandard.
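For reference, a sketch of what that looks like with the JS Self-Profiling API, assuming a browser that exposes Profiler, and reusing the compressBlob helper sketched earlier.

```ts
// Sketch: capture a self-profiling trace and compress the stringified result in userland.
declare const Profiler: any; // JS Self-Profiling API; not in the default TS DOM typings yet

async function captureCompressedProfile(): Promise<Blob> {
  const profiler = new Profiler({ sampleInterval: 10, maxBufferSize: 10_000 });
  // ... let the interesting work run ...
  const trace = await profiler.stop();

  const json = JSON.stringify(trace); // typically a ginormous string, as noted above
  return compressBlob(new Blob([json]), 'gzip');
}
```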
So those are the two main use cases. Certainly it's not all sunshine and rainbows — with any new compression algorithm there are obviously some trade-offs. From the research I've read, Zstandard is not as good at compressing small files, and some of that may be because Brotli ships on the web today with a shared dictionary — I'll get to dictionaries in a moment — so we would probably need to have some sort of similar shared dictionary for Zstandard.
Once you get to a larger file, the file's own content kind of helps build up its compression as it goes, but if you're working with a lot of small, independent files, they may benefit from some sort of shared dictionary to start out with. Another downside is that any time we're talking about adding browser support, we're talking about a binary size increase from the library, so that is a non-zero cost.
One of the ways Zstandard improves compression over gzip and others is by using more memory. Gzip, I think, works with a 32-kilobyte buffer by default, and at the higher Zstandard levels that buffer can grow into the megabytes; I believe the compression level also determines the decompression memory required.
I may be wrong on that, but there is definitely a memory trade-off here: basically, you're getting CPU speed increases and compression ratio increases in exchange for more memory usage. And then from a CDN perspective — of which Akamai is one — there is obviously some downside to potentially having to store the Zstandard-encoded content as well.
If you think about how CDNs store various content in their caches, it's often keyed on the content encoding, because different browsers speak different content encodings, accept different encodings. So today, for example, if CDNs are serving up Brotli content, they probably have a gzipped version and a Brotli version sitting alongside each other, and Zstandard would essentially add another cache key.
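Just to illustrate the extra variant, here's a toy sketch of content-encoding-aware cache keying; it isn't any particular CDN's logic.

```ts
// Toy illustration of why each extra encoding means another stored variant.
function cacheKey(url: string, acceptEncoding: string): string {
  // Pick the best encoding the client advertises, in server preference order.
  const supported = ['zstd', 'br', 'gzip'] as const;
  const chosen = supported.find((enc) => acceptEncoding.includes(enc)) ?? 'identity';
  return `${url}#${chosen}`;
}

// cacheKey('/app.js', 'gzip, br')       -> '/app.js#br'
// cacheKey('/app.js', 'gzip, br, zstd') -> '/app.js#zstd'  (a third cached copy)
```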
Beyond implementing this, there's always the challenge that origins are often still speaking gzip today and relying on the CDN to translate that to Brotli or something else. So if origins are still speaking one compression scheme and the CDN wants to give a better one to the end user, there's always a decompress-and-recompress step that needs to happen, until origins also speak the same compression.
So it's not just a 100% win. I think there's still a lot of research and experimentation that browsers, CDNs, and others would want to do here. The last thing I'm going to touch on really quickly is dictionaries. My understanding is that Brotli ships today with a roughly 120-kilobyte standard dictionary for the web. I've read some discussions that it may be a little biased towards older web content as the seed of the dictionary, and that it's focused on particular languages — it doesn't necessarily cover all languages.
As seed data, I think Zstandard would probably need to ship with something similar for the web, especially for smaller files, smaller content. And there's always the question of how to deliver that: we could do it like Brotli, where it would be another 120-plus-kilobyte payload shipping with the browser, but there's also the potential to use custom dictionaries, which Zstandard supports.