From YouTube: GMT 2018-03-21 Performance WG
So the first thing I wanted to just discuss quickly: I created a dashboard. I guess it's a scrum board or a Kanban board, I forget which is which, but this should be visible to everyone. It basically just matches the performance label, so if any ticket has this label, it'll show up in the dashboard, and...
...Amira's email, some notes from this meeting: just letting people know about this label and the backlog, and, you know, encouraging them to use this label if they know of tickets that are related to performance.
Maybe just like a casual comment, and I apologize for not doing thorough research: basically, after I saw the patches landed, just kind of out of curiosity, I ran our existing reconciliation benchmark. We have the one, I mean, the v1 patch hasn't landed yet, but the benchmark didn't work, so I then ran the v0 benchmark, which didn't show improvement.
Like we did with the master failover patches: when there are obvious performance improvements, like we're just doing fewer copies, I didn't, you know, ask people to write a benchmark before they committed those, knowing that we would write one and have the data available later. So for all of these, we can have those benchmarks. We already have one for reconciliation, but it's v0, so we could have a v1-specific benchmark that we can run, and we can see that, you know, in 1.6...
There was an unfortunate pattern in v1, where v1 tended to, in this particular call, basically copy all the data into... well, it would copy it all into like a common interface. So what I instead tried to do is just have v1 be what the interface uses, and then v0 moves into that.
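A minimal sketch of that shape, with hypothetical names (Resources stands in for whatever the call actually passes): the interface takes the v1 type by value, so v1 callers move straight in, and only the v0 path pays a single conversion.

```cpp
#include <utility>
#include <vector>

// Hypothetical stand-ins for the real v0/v1 types under discussion.
namespace v0 { struct Resources { std::vector<int> items; }; }
namespace v1 { struct Resources { std::vector<int> items; }; }

// Assumed v0 -> v1 conversion (Mesos calls this "evolve"); it consumes
// its argument, so the payload is moved rather than copied.
v1::Resources evolve(v0::Resources&& resources)
{
  return v1::Resources{std::move(resources.items)};
}

// The common interface now stores the v1 representation...
void handle(v1::Resources resources) { /* the actual call */ }

// ...so the v1 path moves straight in, with no copy at all...
void fromV1(v1::Resources resources)
{
  handle(std::move(resources));
}

// ...and only the v0 path pays one conversion (still no extra copy).
void fromV0(v0::Resources resources)
{
  handle(evolve(std::move(resources)));
}
```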
Like, there's already one, for example, for reconciliation, but I think that one's actually kind of dumb: it just doesn't do anything. It just sends a bunch of messages, and it has kind of been used as a more general message-passing throughput benchmark rather than being a, you know, reconciliation-related benchmark. It's like the closest thing we have to just measuring the message throughput, just passing through, because reconciliation is just going to look that up.
...a thousand or, like, you know, fifty thousand agents in one process. So Ian had written just a fake agent for the benchmark that would send the registration message, so it's spoofing part of the protocol.
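A hedged sketch of that fake-agent idea (hypothetical names, not the actual benchmark code): a minimal libprocess actor that speaks only the registration step of the protocol and skips everything else a real agent would do, which is what lets one OS process host tens of thousands of them.

```cpp
#include <process/process.hpp>

class FakeAgent : public process::Process<FakeAgent>
{
public:
  explicit FakeAgent(const process::UPID& master) : master_(master) {}

protected:
  void initialize() override
  {
    // In the real benchmark this would carry a serialized registration
    // protobuf; here an empty payload stands in for it, and the message
    // name is assumed rather than taken from the actual harness.
    send(master_, "mesos.internal.RegisterSlaveMessage");
  }

private:
  const process::UPID master_;
};

// Usage sketch: spawn N fake agents against one master PID.
//   for (int i = 0; i < 50000; ++i) {
//     process::spawn(new FakeAgent(masterPid), true);  // auto-delete
//   }
```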
Well, and that's a totally different... So there's a dashboard for performance in general, and I was just wondering, because I didn't spend too much time thinking or reasoning about the various places that could benefit from copy reduction, I'm kind of just, like, wondering: there must be other places, right? Yeah, for sure; if people have thought about things and put them out here, then for me, if I have some time on the side, I could pick up something just for the sake of it.
...the structure of the offers can probably be moved back to the master, because I think the allocator throws it away; same thing for the inverse offer stuff, but not a lot of people use that right now. Then you have all the calls that the allocator makes, so that's calling back to the master, and then you have all the calls the master makes to the allocator.
Most of them at this point... that's kind of what I was doing, just looking at all the installs, and then I noticed that the replicated log has some as well. The main thing there was that it could move the binary data. I'm not sure how much of a win that's going to be, given that I assume it's pretty fast to copy like a big binary string, but it should help a little bit.
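A small illustration of the kind of change being described, with hypothetical names: take the payload by value and move it down the call chain, so a large binary string is transferred rather than copied.

```cpp
#include <string>
#include <utility>

// Hypothetical append path for the replicated log's binary payload.
struct LogEntry
{
  std::string data;
};

LogEntry makeEntry(std::string data)   // by value: caller may move in
{
  return LogEntry{std::move(data)};    // transfer ownership, don't copy
}

void example()
{
  std::string payload(64 * 1024 * 1024, 'x');      // 64 MB of binary data
  LogEntry entry = makeEntry(std::move(payload));  // no 64 MB copy
}
```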
The allocator stuff might be pretty straightforward as well, so if someone else wants to look at that, that'd be great; otherwise I'll see if I can take a look at it before the next meeting.
So let's say we got our first operation and we go into a write. Now, while we're doing that write, a thousand operations come in, and we queue those up. That first write finishes, we look at the queue, and we're like: okay, great, there are a thousand things this time, so let's process all of those in memory.
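A rough sketch of that write-batching shape (hypothetical names, not the actual registrar code): while one write is in flight, incoming operations accumulate in a queue, and when the write completes the whole batch is applied in memory and persisted with a single follow-up write.

```cpp
#include <deque>
#include <functional>

using Operation = std::function<void()>;

class Batcher
{
public:
  void submit(Operation op)
  {
    pending_.push_back(std::move(op));
    if (!writing_) {
      flush();
    }
  }

private:
  void flush()
  {
    writing_ = true;

    // Apply everything queued so far in memory...
    std::deque<Operation> batch;
    batch.swap(pending_);
    for (Operation& op : batch) {
      op();
    }

    // ...then do one write covering the whole batch.
    write();
  }

  void write()
  {
    // Persist current state. In the real system this is asynchronous;
    // this toy completes synchronously for clarity.
    onWriteFinished();
  }

  void onWriteFinished()
  {
    writing_ = false;
    if (!pending_.empty()) {
      flush();  // everything that arrived during the write goes as one batch
    }
  }

  std::deque<Operation> pending_;
  bool writing_ = false;
};
```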
Many of those things trigger one of these expensive operations, which is an allocation cycle. So when an operation comes in that triggers a cycle, we'll defer that cycle behind all the other events in the queue of the allocator itself, like the actual event queue of the process, so that next time, when we finish that cycle, we...
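A minimal sketch of that deferral idea in libprocess terms (the names here are hypothetical, not the actual allocator code): instead of running the allocation cycle inline, the allocator dispatches it to itself, which places it behind whatever is already in its event queue, so events that arrive in the meantime coalesce into one cycle.

```cpp
#include <process/dispatch.hpp>
#include <process/process.hpp>

class AllocatorProcess : public process::Process<AllocatorProcess>
{
public:
  void addFramework(/* ... */)
  {
    // ...update state...

    if (!allocationPending_) {
      allocationPending_ = true;
      // Re-enqueue the cycle behind everything already queued on this
      // actor, so intervening events are batched into one allocation pass.
      process::dispatch(self(), &AllocatorProcess::allocate);
    }
  }

private:
  void allocate()
  {
    allocationPending_ = false;
    // ...run the (expensive) allocation cycle once for the whole batch...
  }

  bool allocationPending_ = false;
};
```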
So, does that first thing make sense to folks? The idea is that, if you look at other HTTP libraries that have kind of a user-level threading model, like Go, what happens is that every request handler is invoked in a goroutine that gets spun up for that request handler. In a similar way, one of our performance problems is that the request-processing part of it is done by libprocess, parsing the message and all that. Like in Go, the server parses the message, but once it's parsed we hand it off to the handler, and that handler is running on, and blocking, the master. So anything the master is going to do that doesn't need master state, like validating the message, or deserializing part of it, or whatever, that's all done on the master, whereas it could be done on an independent actor that's running in parallel with a bunch of other actors. So we want to try to get some of that initial work, basically the beginning of the request processing and the end of the request processing that don't need to happen on the master, to be outside of the master actor.
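A very rough sketch of that split, with hypothetical names (assuming libprocess's process::async, which runs a function on a worker, and defer, which hops back onto an actor): the parse and stateless-validate steps run off the master actor, and only the state-dependent part is dispatched to the master.

```cpp
#include <string>

#include <process/async.hpp>
#include <process/defer.hpp>
#include <process/future.hpp>
#include <process/process.hpp>

#include <stout/nothing.hpp>

// Hypothetical request/call types for illustration.
struct Request { std::string body; };
struct Call {};

// Stateless: deserialize and do well-formedness checks; no master state.
Call parseAndValidate(const Request& request)
{
  return Call{};
}

class Master : public process::Process<Master>
{
public:
  process::Future<Nothing> handle(const Request& request)
  {
    // Run the stateless prefix on a worker, off the master actor...
    return process::async(parseAndValidate, request)
      .then(process::defer(self(), [this](const Call& call) {
        // ...and hop back onto the master only for the stateful part.
        return apply(call);
      }));
  }

private:
  Nothing apply(const Call& call) { /* uses master state */ return Nothing(); }
};
```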
It's certainly only going to be relevant for the master for now, because we just don't run other actors at as high a load as the master. For the master, when we ran benchmarks, a significant cost is in the deserialization and validation of messages. Deserialization tends to not need the master-related state at all, and validation, much of it is stateless. Some of it is stateful, like it needs to look at the master's data structures to see if a request makes sense, but some of it is just: is this thing well-formed and all that, and we have to traverse the message and look at things to do that. So those pieces, if we could have the master deserializing all the incoming messages in parallel across the...
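A toy illustration of that stateless/stateful split (all names hypothetical): the well-formedness checks need only the message itself and could run on any actor, while the stateful checks still have to run where the master's data structures live.

```cpp
#include <string>

// Hypothetical message and master types for illustration.
struct TaskInfo { std::string name; std::string taskId; };
struct MasterState { bool knowsTask(const std::string& id) const; };

// Stateless: only inspects the message, safe to run off the master actor.
bool isWellFormed(const TaskInfo& task)
{
  return !task.name.empty() && !task.taskId.empty();
}

// Stateful: consults master data structures, must run on the master.
bool isValidAgainstState(const MasterState& master, const TaskInfo& task)
{
  return !master.knowsTask(task.taskId);  // e.g. reject duplicate task IDs
}
```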
And so, if you can do that deserialization in parallel, rather than serially on the master, you spend less time in the master just not doing anything other than deserialization, and you can get a little bit of, like, Amdahl's law, and speed up the parallel part of the flood of incoming messages.
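(For a rough Amdahl's law sense of the ceiling here, with made-up numbers: if a fraction p of the master actor's time goes to deserialization and that work is spread across n workers, the overall speedup is 1 / ((1 - p) + p/n); so if deserialization were, say, 30% of master time and parallelized perfectly, the master could get at most 1 / 0.7, about 1.43x, faster.)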
Deserializing protobuf: it's got a binary string, which is coming in as the HTTP body, and it's got to construct C++ protobuf objects from that. In the case of v1, it also has to do some expensive evolving, devolving rather, from the v1 incoming binary data into a v0 C++ object. So in order to do that, it will go through v1 C++, serialize that again, and then deserialize that as v0 C++.
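A sketch of that devolve round-trip in generic protobuf terms (the function and parameter names are illustrative; in Mesos the concrete types would be generated classes like mesos::v1::FrameworkInfo and mesos::FrameworkInfo, which share the same wire format):

```cpp
#include <string>

#include <google/protobuf/message.h>

// `v1Bytes` is the incoming HTTP body; `v1Message` and `v0Message` are
// wire-compatible generated protobuf objects.
bool devolve(
    const std::string& v1Bytes,
    google::protobuf::Message* v1Message,   // scratch v1 object
    google::protobuf::Message* v0Message)   // output v0 object
{
  // 1. Deserialize the incoming body as the v1 message.
  if (!v1Message->ParseFromString(v1Bytes)) {
    return false;
  }

  // 2. Serialize the v1 object straight back out...
  std::string bytes;
  v1Message->SerializeToString(&bytes);

  // 3. ...and deserialize those bytes as the v0 message. This is legal
  //    because the protos are wire-compatible, but it costs a full extra
  //    serialize/parse round-trip per incoming message.
  return v0Message->ParseFromString(bytes);
}
```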
Like, if you look at the routing API in libprocess, the routes are bound to actors, which was an unfortunate decision, because if you want to change an actor name, or like move things, refactor things, and you no longer have a certain actor, your routes are kind of stuck with those names in them, and you have to fake that there's an actor there. So the API is also something that I wanted to improve. But yeah, I mean, you're right: you could still go through the master serially initially and sequence all that work. You'd probably need to do some sequencing there to make sure that it's still ordered correctly and everything, but it just kind of seems to me to be a little more complicated than an approach where handlers are freestanding and call into things when they need to go to a specific actor.
It's just that that code is sequencing everything and doing additional dispatches, and, I don't know, that's pretty expensive. When I looked at it, it wasn't obvious to me whether it's going to be easy to make that faster, and it's possible that I'd have to investigate how other libraries, like maybe in Go, do authorization and authentication, yeah...
He ran the reconciliation benchmark, if I remember right. He had shown me a v1 version of it where he said that this additional authentication or authorization code was what made it a lot slower, like 2x slower, something like that.
Because when I was looking at some of the master code, there were some cases where there's basically a const loop over a lot of things. One example is reconciliation, where we do a loop over all the incoming tasks that want to be reconciled, and the logic inside each loop iteration is basically const against the master: it's just going to look up the task and so on. But with the sheer number of tasks in there, like for large schedulers when it's a hundred thousand tasks, it's expensive to actually run the loop. So it would be nice to have something like a parallel for when we need it, when we have code that fits this pattern, to be able to speed that kind of code up. There's a lot of subtlety to discuss with something like this, and there's also the C++ parallel STL.
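A minimal sketch of that parallel-for shape using plain std::async (hypothetical names; the real version would need care around the actor model, which is what the libprocess discussion below is about): the read-only per-task work is chunked across threads, and the master's state is only ever read.

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <thread>
#include <vector>

// Apply a read-only function to every element, chunked across threads.
template <typename T, typename F>
void parallelFor(const std::vector<T>& items, F f)
{
  const size_t workers =
    std::max<size_t>(1, std::thread::hardware_concurrency());
  const size_t chunk = (items.size() + workers - 1) / workers;

  std::vector<std::future<void>> futures;
  for (size_t start = 0; start < items.size(); start += chunk) {
    const size_t end = std::min(items.size(), start + chunk);
    futures.push_back(std::async(std::launch::async, [&items, &f, start, end] {
      for (size_t i = start; i < end; ++i) {
        f(items[i]);  // must be const with respect to shared state
      }
    }));
  }

  for (auto& future : futures) {
    future.get();  // wait for all chunks, propagate exceptions
  }
}
```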
What I was thinking for this is that we'd probably start with just adding something to libprocess that uses processes to achieve this, and make sure it doesn't deadlock and so on. But longer term I'm not sure; it might make sense for libprocess to integrate more with the stuff that gets standardized in C++ when it comes to parallelism.
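For that libprocess-based starting point, the existing process::async and process::collect primitives could express the same thing; a hedged sketch, not actual Mesos code (reconcileOne and Update are hypothetical):

```cpp
#include <string>
#include <vector>

#include <process/async.hpp>
#include <process/collect.hpp>
#include <process/future.hpp>

// Hypothetical: reconcile one task ID, read-only against master state.
struct Update { /* ... */ };
Update reconcileOne(const std::string& taskId);

process::Future<std::vector<Update>> reconcileAll(
    const std::vector<std::string>& taskIds)
{
  std::vector<process::Future<Update>> futures;
  futures.reserve(taskIds.size());

  for (const std::string& taskId : taskIds) {
    // Each unit of work runs on a libprocess worker, off the master actor.
    futures.push_back(process::async(reconcileOne, taskId));
  }

  // Completes once every per-task reconciliation has finished.
  return process::collect(futures);
}
```

In practice you would likely chunk the task IDs rather than create one future per task, but the shape is the same.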
Okay, well, if that's it, we can end a little bit early. Thanks, guys, for joining. Yeah.