From YouTube: 2018-01-17 Performance WG
Description
Agenda and Notes:
https://docs.google.com/a/mesosphere.io/document/d/12hWGuzbqyNWc2l1ysbPcXwc0pzHEy4bodagrlNGCuQU/edit?usp=drive_web
And then I believe I have to start the recording, or maybe it's already occurring.
So this test is conducted on a 13-inch MacBook Pro, 2016 version. It has a 3.3 GHz Intel Core processor, and Mesos is built with an optimized configuration. So here are the results.
So, first of all, we are mostly interested in comparing the performance between the different APIs. There were reports that the v1 APIs are slower than v0, and we want to find out whether that is the case, and how much slower it is.
So, first, for the 1.3 version. We can see that the v1 protobuf is actually slightly faster than v0, by about 3%, but the v1 JSON is much slower than the other two APIs: it's like 7x slower than the other APIs on average. And here are the 1.4 results; the result is roughly the same. So on average v1 protobuf is 6% faster, and JSON is even slower, close to 8x. And for the post-1.5 build, we did a couple of quick optimizations for the v1 API.
I think, to add to what Meng said: building up the entire JSON object is very expensive. We learned that from when we were optimizing v0, and v1 JSON was doing that: it was building up the entire JSON object by copying all the protobuf data into a JSON object. So instead we just go directly from the protobuf object that we have in hand and convert that directly to serialized JSON, yeah, using jsonify.
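For reference, a minimal sketch of the direct-serialization idea (Mesos itself does this with stout's `jsonify`; this standalone example uses protobuf's own `MessageToJsonString` instead, and assumes the generated Mesos v1 protos):

```cpp
#include <iostream>
#include <string>

#include <google/protobuf/util/json_util.h>

#include <mesos/v1/mesos.pb.h>  // assumes the generated Mesos v1 protos

// Convert a protobuf message straight to serialized JSON. Unlike the old
// v1 JSON path, no intermediate JSON object tree is built up: the message
// is walked once and the output is appended directly to the string.
std::string toJson(const google::protobuf::Message& message)
{
  std::string json;
  google::protobuf::util::MessageToJsonString(message, &json);
  return json;
}

int main()
{
  mesos::v1::TaskID taskId;
  taskId.set_value("task-123");

  std::cout << toJson(taskId) << std::endl;  // {"value":"task-123"}
}
```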
Okay, I think that's fine. It's worth looking, just to make sure, that it's not like in v1 we're giving out more data than what we were giving in v0. Maybe the numbers are skewed because of that as well. It is not a one-to-one mapping, if I remember correctly, between what we send in GET_STATE in v1 versus what we used to send in v0. So it's worthwhile just doing a rough check to make sure there's not a significant difference in the actual response payload.
I mean, comparing JSON, or... yeah.
So, since they dominate the overall data when it comes to large clusters, or they should at least, hopefully the extra differences become negligible at those scales. There's still a lot of inefficiency in place if we did want to optimize the protobuf stuff further in v1. I think, ideally, v1 would be faster than v0, whether you're using protobuf or JSON, and that would be a nice incentive for people. Yeah.
So there's a variety of improvements, right. Some of them are small and easy, but I don't think those are gonna get the JSON below v0. Some of the more involved changes would definitely do that, but they're quite complicated: like, if we want to stream directly from the v0 protobufs to v1 JSON, that would be very efficient, but it's difficult to do that change.
Yeah, I mean, anything that's low-hanging fruit here is probably worth doing; it's just that some of the more substantial improvements would come from some substantial changes. We still didn't explore using arenas. Those, from what I've seen in the past, don't improve performance a lot in the single-threaded case, which is what this benchmark is, but in a multi-threaded scenario, like in an actual cluster, it helps a lot.
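As a reference for what that would look like, here is a minimal arena sketch (assuming the generated Mesos v1 protos and that they are compiled with `option cc_enable_arenas = true`; this is not code from any existing branch):

```cpp
#include <google/protobuf/arena.h>

#include <mesos/v1/mesos.pb.h>  // assumes the generated Mesos v1 protos

void buildTask()
{
  google::protobuf::Arena arena;

  // The message and its submessages are bump-allocated on the arena
  // instead of being individually heap-allocated.
  mesos::v1::Task* task =
    google::protobuf::Arena::CreateMessage<mesos::v1::Task>(&arena);

  task->mutable_task_id()->set_value("task-123");

  // ... build up and serve the message ...

  // When `arena` goes out of scope the whole tree is freed in one shot.
  // In a single-threaded benchmark this gains little, but with many
  // threads it avoids contention on the global allocator, which is
  // where the big win in a real cluster comes from.
}
```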
So we could go down that route, especially when it comes to evolving messages and building up the v1 message or the v0 message. Sorry, one thing that is also pretty expensive is just that we have to evolve from v0 to v1, and to do that, we have to serialize it. Uh-huh.
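Roughly, the evolve step being described looks like this (a simplified sketch of the idea, not the exact Mesos helper; it relies on the v0 and v1 protobufs being wire-compatible):

```cpp
#include <string>

#include <google/protobuf/message.h>

// "Evolve" a v0 message into its v1 counterpart. Because the two proto
// definitions are wire-compatible, the conversion is a serialization of
// the v0 message followed by a reparse into the v1 type; that round
// trip is exactly the cost being called out as expensive above.
template <typename T>
T evolve(const google::protobuf::Message& message)
{
  std::string data;
  message.SerializeToString(&data);  // serialize the v0 message...

  T t;
  t.ParseFromString(data);           // ...and reparse it as the v1 type.

  return t;
}

// Usage (assuming the generated Mesos protos):
//   mesos::v1::TaskStatus status = evolve<mesos::v1::TaskStatus>(v0Status);
```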
For v1 protobuf, for 1.4 it's 3% faster, and for post-1.5 it's more than 40% faster. I think the things we did really helped improve the performance. And for the v1 JSON, the same story here: 1.4 and 1.3 are mostly the same, and for post-1.5...
You're saying the fact that we got feedback from the community, and that we acted on it and we improved it, is in itself a good thing to share with the community. I mean, the blog post doesn't have to be anything fancy. It's probably like a couple of paragraphs, and then the graphs put in there would be good enough. Hopefully it shouldn't take more than like a day to write it. So, if you have time...
I think GET_STATE is the biggest culprit in our... So if GET_STATE is faster, everything else would also be faster. The improvement numbers might look different, but yeah, it is the one that's used the most: a lot of people, a lot of tooling out there. So I think that's what most people care about.
Actually, we just cut over from v0 to v1 pretty much two weeks ago, completed the cutover. I mean, so I think the numbers, yes, in production, the numbers we're observing match this benchmark almost exactly. But we didn't do precise measurements; apart from, like, ad hoc testing, it seems like v1 protobuf has performed similarly to v0 JSON, and we didn't really try v1 JSON, so we just know it's much slower.
Well, I think the only other thing to discuss is... you guys can see the agenda, right? Yeah, okay. The only other thing to discuss is some discussions that we've had about serving state off of a different actor than the master, by, you know, dog-fooding the streaming API that we provide to operators, in order to stream state into that different actor. And the goal would be, firstly, to take the interference off of the master.
So if people are really hammering it with state requests, the master's not spending time serving those; that different actor would be. And the second objective would be to also yield some performance improvements, just because we would have a protobuf object ready to go, rather than having to copy it from the v0 state and evolve it to v1 and so on. But there hasn't been any work there yet. I think Ben H. has a branch where he's been hacking on it, but it's not... it's not something...
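For a sense of the shape being discussed, a purely hypothetical libprocess sketch (the actor name, endpoint, and caching scheme are all illustrative; as noted, no such actor exists yet):

```cpp
#include <string>

#include <stout/none.hpp>

#include <process/http.hpp>
#include <process/process.hpp>

// Hypothetical: a separate libprocess actor that keeps its own copy of
// the cluster state, kept up to date by subscribing to the master's v1
// streaming API, and serves read-only state requests so that they never
// queue up on (or interfere with) the master actor itself.
class StateServerProcess : public process::Process<StateServerProcess>
{
protected:
  void initialize() override
  {
    // State is served off this actor; the master is not involved.
    route("/state", None(), [this](const process::http::Request&) {
      return process::http::OK(cachedState);
    });
  }

private:
  // Updated from the streaming API: the SUBSCRIBED snapshot first, then
  // incremental TASK_ADDED / TASK_UPDATED / ... events.
  std::string cachedState;
};
```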
I think, to me, the slightly weird part about that is that state is the only thing that's being served up by a different actor. It almost sounds like it would be better if there were an API actor that serves all API requests, so they're not served by the master itself. Just doing the GET_STATE on that actor seems a bit, I don't know, arbitrary. Well...
It's a good point. We could... we could be doing that as well. I think probably the focus was just on this stuff because it's the most expensive stuff, but yeah. We could have a level of indirection there, where there's like an actor that's doing some of the expensive stuff, like deserializing things before it hits the master, yeah.
So what Ben H. actually did in his branch was he had the UI use the streaming API, but I don't think that's actually gonna help people with large clusters, because the initial response on the streaming API is still so big. So I think what we'd be doing instead is: if we had something like GraphQL, where you could query for a subset of the state, we would probably, for now, just leave the UI polling the master, but we would have it specify that it doesn't need task labels, it doesn't need...
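Even without full GraphQL, something in that direction could be sketched with protobuf's `FieldMask` (hypothetical; there is no such request field in the Mesos API today, and the paths below are illustrative):

```cpp
#include <google/protobuf/field_mask.pb.h>
#include <google/protobuf/util/field_mask_util.h>

#include <mesos/v1/master/master.pb.h>  // assumes the generated v1 master protos

// Hypothetical: trim a GET_STATE response down to only the fields the
// client asked for, so e.g. the UI can say "I don't need task labels"
// instead of receiving (and the master serializing) the full state.
void trimResponse(
    const google::protobuf::FieldMask& mask,        // e.g. sent by the client
    mesos::v1::master::Response::GetState* state)
{
  // Clears every field not covered by a path in `mask`.
  google::protobuf::util::FieldMaskUtil::TrimMessage(mask, state);
}

// Usage (paths are illustrative):
//   google::protobuf::FieldMask mask;
//   mask.add_paths("get_tasks.tasks.task_id");
//   mask.add_paths("get_tasks.tasks.state");
//   trimResponse(mask, response.mutable_get_state());
```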
My problem... so, the problem that we have is we're actually holding off giving the Mesos, you know, UI to the internal engineers, mostly limiting it to our team, because it can be slow; they can even slow down the master itself. Yes. Oh, but sometimes, sometimes we even... you know, a customer wants to use this as a cross-reference check about what's going on in the cluster. Yeah.