From YouTube: GMT 2018-08-15 Performance WG
B
Okay, so can you see my screen? Yeah, okay. So, yeah, here's the reason, and what I'm trying to do.
B
If
you
check
the
mount
table,
it's
gonna
be
super
expensive
and
I'm,
trying
to
change
that
on
trying
to
not
do
the
check
and
do
the
check
in
the
color
like
basically
like
in
isolate
the
creation
stage.
We
do
all
these
checks
and
we
don't
do
any
check
to
receive
your
read,
for
example,
getting
stats.
B
So, in order to do that, I want to have some benchmark so that at least I can see the performance improvement after doing that. So that's the motivation for creating a benchmark for this, and I think it's in general useful for us to understand the performance bottlenecks in the container launch path or the destroy path. So I created this benchmark. I haven't committed it yet because it hasn't received any review, but I think it's ready to go.
B
The benchmark itself is pretty straightforward. Right now I have a bunch of parameters. One parameter is how many containers you want to launch on the agent box, and the other one is whether you launch without an image or with an image. The reason I have this parameter is that if you have a container with an image, then you have additional entries in the mount table for the rootfs mount.
B
So that's what I parameterize at this moment. There might be something else that we want to parameterize later, but it should be easy to do so. One thing I realized is that if you want to run this benchmark on your box, you have to change some of the resource limits on your machine; for example, you need to increase the open file limit to some bigger number.
B
If
you
want
to
run
like
a
thousand
and
also
I,
also
realized
that
you
have
to
increase
this
estimate
max
connection,
mom
parameter
in
the
kernel,
because,
right
now
the
default
value
is
100,
turning
eight
by
default.
So
that's
basically,
when
you
call
listen
system
call
and
that's
the
back
lot
size
and
if
the
back
lock
size
is
only
128,
you
don't
have
a
way
to
so
it's
sometimes
you
get
like
connection
reset
by
peers
there,
when
you're
trying
to
connect
to
the
agent
endpoint.
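The clamping behavior described here can be sketched in a few lines. This is only an illustration of the Linux semantics being discussed (the effective backlog is capped by net.core.somaxconn no matter what the caller asks for); the function itself is hypothetical:

```python
# Sketch of how the kernel clamps a listen() backlog. Assumption:
# Linux semantics, where the effective queue length is the minimum of
# the requested backlog and the net.core.somaxconn sysctl.

def effective_backlog(requested: int, somaxconn: int = 128) -> int:
    """Return the backlog the kernel actually uses."""
    return min(requested, somaxconn)

# With the old default somaxconn of 128, requesting a huge backlog
# changes nothing; once the accept queue overflows, new connections
# can fail with "connection reset by peer".
print(effective_backlog(500000))        # still 128
print(effective_backlog(500000, 4096))  # 4096 after raising somaxconn
```

Raising the sysctl (for example with `sysctl -w net.core.somaxconn=4096`) is what actually widens the queue; re-running listen() with a bigger number alone does not.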
B
So
we
have
to
increase
this
number,
but
I
think
one
of
the
option
is
trying
to
kind
of,
although
set
it.
You
know,
set
up
on
text
fixture
and
then
reset
it
back
to
the
normal
value
after
this
is
done,
but
I
think
I
just
captured
as
a
notes.
Right
now,
the
benchmark
itself
is
kind
of
simple
I
use,
all
the
Isolators
that
we
using
DCOs,
but
I
think
this
is
kind
of
the
representative.
B
It's a set of isolators people are using right now; I just want to mimic the production environment. And the benchmark workload is very easy. I didn't use the framework API, because I wanted to eliminate the overhead of launching executors and doing all these status updates. So I directly use the agent API to launch containers, launching standalone containers by using the v1 API, and specify some minimal resources. Right now, I don't have a way...
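As a rough sketch of what such a standalone-container launch against the agent's v1 API might look like: the call type, field names, and values below are assumptions based on the protobuf-to-JSON mapping, not taken from the actual benchmark code.

```python
# Hypothetical builder for a v1 agent LAUNCH_CONTAINER call body, the
# kind of request the benchmark POSTs to the agent to launch a
# standalone container with minimal resources.

def build_launch_call(container_id: str, command: str,
                      cpus: float, mem_mb: float) -> dict:
    """Build the JSON body for a standalone-container launch (assumed shape)."""
    def scalar(name, value):
        return {"name": name, "type": "SCALAR", "scalar": {"value": value}}
    return {
        "type": "LAUNCH_CONTAINER",
        "launch_container": {
            "container_id": {"value": container_id},
            "command": {"shell": True, "value": command},
            # minimal resources, as described in the discussion
            "resources": [scalar("cpus", cpus), scalar("mem", mem_mb)],
        },
    }

call = build_launch_call("bench-0", "sleep 1000", cpus=0.1, mem_mb=32)
print(call["type"])
```

The body would be POSTed to the agent's v1 endpoint; the response does not come back until the container is launched, which is what the benchmark waits on.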
B
I mean, I'd need the workload to kind of exercise most of the isolator features, but that's kind of the future work I plan to do. And then you just launch the containers and wait for those containers to be fully launched. I mean, you won't receive a response until the container is launched anyway. Then, for those containers, to terminate them...
B
You
kill
those
container
and
then
you
wait
for
the
the
termination
to
finish,
and
then
you
report
at
how
many,
how
much
content
does
it
take
to
launch
all
these
containers
at?
How
long
does
it
take
to
destroy
all
these
containers
so
I
have
some
data
I
ran
once
out.
A
thousand
containers
I
observed
that
that
we
can
launch
a
thousand
container
on
a
GPU
box
in
56
second,
and
also
I,
captured
a
flame
scope
grasp
for
on
floor
for
the
benchmark.
I
didn't
spend
time
analyzing
it,
but
something
that
caught
my
eye.
A
We've seen that before, and I think Meng was mentioning that that's actually perf. Yeah, like, if you look that up, you'll get some perf discussion around why that shows up and so on; I mean, there's a perf discussion about that exact thing. Yeah, okay. I'm not sure if it's actually affecting the performance of the agent itself or if it's just kind of noise, I don't know, but yeah, it seems to be related to perf itself. Yeah.
B
A thousand, yeah, in 60 seconds. Okay, anyway, I think we need to do more evaluation. I'll collect more traces so that we can analyze those. And, on a side track, I'm also working on the cgroup improvements, removing those verify functions so that we don't check the mount table anymore, so I'll see how much the performance improves after doing the cgroup stuff. Yeah, right now it's not too bad, like 50 seconds for a thousand containers. That's like 50 to 60 milliseconds per container, yeah.
B
I mean, the reason I collected this perf profile is that one time I ran the benchmark, I realized that the teardown was taking so long, and that's why I wanted to do some perf, to see why it's taking so long to do the teardown; that's the graph I got. I haven't actually tried to collect the perf profile for a normal execution that finished normally.
C
You don't know the jitter, for example. So usually, if I were trying to benchmark this kind of thing, I would basically do it in a loop, right? So if you're trying to benchmark, say, disk IO or network IO, you issue requests at a particular rate, and then you measure the throughput of the requests.
B
I mean, yeah, I think initially I considered this one, but I think the downside is that you don't have concurrency; you don't have containers launching concurrently, which I actually do want. I think one of the bottlenecks that we observed is actually when you have a lot of mount table entries in the system, and if you do it in serial order, then you won't be able to trigger this condition of having too many mount table entries. Well...
C
Rather than killing, you just have the container exit zero, so that you don't have to block on the kill. But basically, instead of having a single launch-and-kill cycle, you measure over a fixed time window. So you say: I'm measuring a single container launch, and you just launch the container in a loop over three minutes, and then you measure, over that three minutes, how many times did I launch one container.
C
As concurrency goes up, you can measure, okay, two, three, four. Say what your workload is: launch five containers, or launch ten containers, and, you know, keep launching them, and as they exit, keep relaunching them for your time window. Then that will still give you a number of launches per second. So that gives you something that you can compare over time, but...
C
So if you have ten, you launch them, okay, and then as they exit, you relaunch them, uh-huh, and so that will give you, basically, the rate, the launch rate per second, and you measure it over a fixed window. That gives you results that you can compare. So it's slightly different; it's going to be a slightly different test than what you're doing here, right? Yeah, yeah.
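The fixed-window approach suggested here can be sketched roughly like this. The `launch_one` callable is a stand-in for the real agent launch-and-exit cycle, purely an assumption for illustration:

```python
import time

def fixed_window_benchmark(launch_one, window_s: float = 180.0) -> float:
    """Launch containers back-to-back for a fixed time window and
    return the launch rate in launches per second.

    `launch_one` is a hypothetical callable that launches a container,
    waits for it to exit zero, and returns; it stands in for the agent
    API calls discussed above."""
    launches = 0
    start = time.monotonic()
    while time.monotonic() - start < window_s:
        launch_one()
        launches += 1
    return launches / window_s

# Example with a stub "container" so the sketch is runnable:
rate = fixed_window_benchmark(lambda: time.sleep(0.01), window_s=0.5)
print(f"{rate:.1f} launches/sec")
```

Because the window is fixed, two runs of this loop (say, before and after the cgroup changes) produce rates that are directly comparable, which is the point being made above.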
B
I
think
that
sounds
reasonable
to
me.
Yeah
I
can
another
one
like
maybe
just
write
you
and
see.
You
know.
I
think
I
think
that
the
thing
that
I
want
to
click
here
that
I
haven't
collect
is
like
the
like,
as
you
mention
like.
What's
the
latency
like
the
timing
issue
from
the
time
you
issue
the
launch
to
the
time
the
containers
are
launched
that
latency
I
didn't
collect
that
information
right
now.
It
would
be
nice
to
know
like
dying
information
to
like
how
long
does
it
take
from
launch
to
actually
being
lost
yeah.
B
The way I do it, I collect two times: the launch time and the destroy time. The launch time, I think, starts from when I issue the requests to the agent, and we do that in a tight loop, just issuing as fast as I can, and then wait on the responses, until all the responses are received and they're OK. So that's basically when all the launches have been processed, when the user process has been forked.
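The measurement just described (fire all requests in a tight loop, then block until every response is back, and time the whole phase) might be sketched like this; the callables here are stubs standing in for the agent API calls, which is an assumption:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_phase(actions) -> float:
    """Issue all requests as fast as possible, then block until every
    response has come back; return the wall-clock time of the phase.

    Each element of `actions` is a hypothetical callable standing in
    for one launch (or kill) request against the agent."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=64) as pool:
        futures = [pool.submit(a) for a in actions]  # tight issue loop
        for f in futures:
            f.result()                               # wait for every response
    return time.monotonic() - start

# Stub "containers" so the sketch runs on its own:
launch_time = timed_phase([lambda: time.sleep(0.02)] * 100)
print(f"launched 100 stub containers in {launch_time:.2f}s")
```

The same helper times the destroy phase: start the wall clock when the kills are issued, and stop it when the last termination acknowledgement arrives.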
B
Do you happen to have the numbers for the other parts, like the killing? Yeah, so there's where the numbers come from: I didn't capture that separately, but I think when I start to issue the kills, I start the wall clock and then wait until everything is killed. So that's basically how long it takes to destroy all these containers.
B
I think that depends. I think for most of the agent configurations in the system, you set the ulimit, I mean at least the open file limit, to be unlimited, so you're covered for that one. Yes, I think we might run into this, because we never changed it, and actually I think that's the one I reached out to you about: right now it's just a hardcoded listen backlog number, and it just doesn't make sense.
B
Yeah, so right now the number is 5,000, sorry, 50,000, sorry, 500,000, which doesn't really make sense, because the kernel just caps this to, like, 128. I don't know why we specified that big a number there during listen, but yeah, I think we might run into this issue if there are a lot of concurrent connections to the server, which on the master there might be. You might hit this issue on the agent as well.
B
I'm not sure whether we'd be hitting this on the master; I think we reuse the same code. So my concern is a case like, if you have a lot of agents trying to connect to the master simultaneously, I suspect that we might run into some "connection reset by peer" kind of situation. I don't know if you guys have run into that issue or not, but it's pretty easy to trigger if you have a lot of concurrent connections.
B
I think we should definitely do that. Yeah, I think, to me, we should definitely do that, and we can chat more. I think I glossed over this, it's not all right, but yeah, I'll go over it, and I think that's a thing we should definitely do. We don't have any metrics on that.
B
I'm,
just
thinking
like
what
kind
of
metrics
it
will
be,
is
there
like
a
what's
that
caucus
yeah.
B
Is
there
a
way
to
collect
I,
say:
hey
I
want
to
just
okay
I,
see
I'm,
just
thinking
like
what
we
should
do
to
get
that
information.
So
there's
a
timer,
that's
gonna
use
by
the
container
riser
to
do
this
and
each
time
each
time
you
launch
a
container
you
inject
somebody,
but
that
window
doesn't
make
sense.
I,
don't
know
what
that
window
means
the.
C
Yeah, that's why, like, I don't know; for this kind of thing it almost feels like a logs problem. Because if I have, you know, a log which contains the startup latency, then I can look at it for the last six months, and I can compute the p99 over whatever time window I want, whatever tells me what I want within those six months, right? So you have a lot more flexibility when you have discrete events for everything.
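The flexibility being argued for here is that, with one event per container launch kept in a log, any percentile over any window can be computed after the fact. A minimal nearest-rank sketch (the latency values are made up for illustration):

```python
def percentile(samples, p: float) -> float:
    """Nearest-rank percentile over a list of discrete latency events."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# One event per container launch (hypothetical latencies in ms).
# Because the raw events are kept, the window and the percentile are
# both free choices made at query time, not at collection time.
latencies_ms = [40, 42, 55, 48, 51, 47, 60, 43, 250, 45]
print(percentile(latencies_ms, 99))  # 250, the one slow outlier
print(percentile(latencies_ms, 50))  # 47
```

A pre-aggregated counter or gauge, by contrast, bakes in one window and one statistic at collection time, which is the limitation discussed next.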
C
If you have a count of seconds, if your launch time is a count of seconds, then you can plot that rate, converted, in Prometheus, which will show you what it's doing, but it won't really give you, it won't really help you understand per-container latency, yeah.
A
What's been happening there is that the initial patch, which is parallel state serving for v0, landed, so that's going to be in 1.7. There's more work to do; we want to eventually get all the reads to be done this way, but right now only the /state API is done this way. So there's more work going on there; we're not done yet.
A
The
next
thing
was:
oh
yeah,
there's
one
more
thing
here,
which
was
eliminate
a
double
dispatch
from
authorization.
So
right
now,
authorization
is
actually
done
on
the
master
actor,
which
means
events
have
to
trip
through
the
Q
ones,
to
get
to
the
authorization
part,
and
then
they
have
to
trip
through
again
with
authorization
completes
and
authorization
is
just
for
this
principle.
What
are
the.
A
Give
us
back
the
object,
authorizers
object,
approvers
and
that
can
be
done
outside
of
the
master
actors.
So
Bano
is
looking
into
moving
that
outside
and
it's
a
bit
of
a
broader
change,
because
if
we
probably
want
to
move
all
the
end
points
so
that
their
validation
and
so
on
can
be
done
outside
the
master
actor.
Before
we
go
in
to
the
masters
q.
A
What else did I have here? There's still allocator work being done. We've been kind of sidetracked because of the authentication scalability problems, but I think the main thing, and you can correct me if I'm wrong, I think the main thing left here to do was the copy-on-write resources patch, because we found that it's still the case that copying is the dominant part of the time spent in the allocator, especially when resources are, like, fragmented ports or things with a bunch of labels.
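The copy-on-write idea mentioned here can be illustrated with a minimal sketch; this is not the actual Mesos patch, just the general pattern of sharing one underlying resource object and paying for the expensive deep copy only on the first write:

```python
import copy

class CowResources:
    """Minimal copy-on-write wrapper (illustrative, not the real patch):
    readers share one underlying resource list for free, and the
    expensive deep copy happens only when someone mutates."""

    def __init__(self, resources):
        self._shared = resources

    def view(self):
        return self._shared                          # cheap: no copy on read

    def mutate(self, fn):
        self._shared = copy.deepcopy(self._shared)   # copy only when writing
        fn(self._shared)
        return self._shared

# Fragmented port ranges are exactly the kind of resource where the
# deep copy is expensive (hypothetical representation).
ports = CowResources([{"name": "ports", "ranges": [(31000, 31005)]}])
alias = ports.view()                  # shared, zero-copy
ports.mutate(lambda r: r[0]["ranges"].append((32000, 32001)))
print(alias is ports.view())          # False: the write triggered the copy
```

In an allocator loop that mostly reads resources and rarely mutates them, this moves the copying cost off the hot path, which is the motivation stated above.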
A
I mean, I guess the first thing is, you know, even if we didn't make it a default, bundled thing, we certainly need to make it a lot easier for people to use it, and I imagine that we would probably try to still provide it to users. What that would mean is that releases like DC/OS would probably just make that choice for them. Yeah.
C
So the memory profiling was really quite a nice piece of work. He looks for the jemalloc symbols dynamically and uses them if they're present. So the memory profiler works if you run under jemalloc, and it doesn't require you to actually link with it at build time. Yeah.
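The dynamic-lookup pattern described here (probe the running process for the allocator's symbols instead of linking against them) can be sketched with ctypes; this assumes a Linux-style dynamic loader, and the jemalloc symbol name below is illustrative:

```python
import ctypes

def find_symbol(name: str):
    """Probe the current process image for a symbol at runtime, the
    same pattern as dlsym-ing for jemalloc's entry points rather than
    requiring a build-time link."""
    try:
        # CDLL(None) opens the running process itself on Linux/macOS.
        return getattr(ctypes.CDLL(None), name)
    except AttributeError:
        return None

# libc's malloc is always resolvable; a jemalloc-specific hook such as
# malloc_stats_print only appears when the process runs under jemalloc.
print(find_symbol("malloc") is not None)
print(find_symbol("malloc_stats_print") is not None)
```

When the probe returns None, the profiler simply reports that the feature is unavailable instead of failing, so binaries not running under jemalloc are unaffected.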
C
So that was my understanding of the proposal: if you enable it at build time, then you're actually linking against it at build time, which means that libmesos would bring it in, which means that you'd end up with jemalloc in everything that links to libmesos. And a bunch of stuff links, or can potentially link, to libmesos, you know, things that we ship as part of the project.