From YouTube: Ceph Performance Meeting 2018-10-04
A: Right, let me quickly just get through the two new pull requests that are in here. We seem to have slowed down a little bit this week. We've got one new one from Radek that improves string processing on the write path. It's, I think, a relatively small change but a win, so I think everyone's in agreement that it looks good. And there was also just a little bit of movement on the librbd shared persistent read-only RBD cache work.
A: It sounds like maybe there's some new code coming, though we'll see about that. Otherwise, a couple of folks have reviewed it, so I think there's probably still more to come, but yeah, that's about it. That's the other stuff going on that I saw this week, anyway. All right! Well, let's see the real-life review. Okay, Nick, how quick is your…
C: Sure, right. Do you have that on your screens? Yes? Okay, excellent. So, for a couple of months now, one of our clusters has been upgraded to BlueStore. I know you guys are always really interested in how things are working outside of, I guess, benchmarks and stuff, so I thought I'd just give some feedback.
C: So, the good: everything seems to be running really well, and we're not hitting any bugs. We're getting about one and a half times compression, which is really good, and EC overwrites too; I want to take more advantage of that. Our SSD predicted wear has completely changed course, which is really nice, because some of the SSDs were starting to show that they might need to be replaced before we thought they would. The SSD-only pools are performing much better due to no double writes.
C: That's been really good, and there hasn't been any 'fun', in inverted commas, with all the caching entries with large discs and stuff that you have with FileStore. I also suspect that snapshots seem to be a lot better with RBD; I'm guessing that's because BlueStore is doing something more efficient under the hood than copying files between folders.
C: So that's all the good stuff. The bad: I've seen some sort of quite poor performance in some use cases, which is why I wanted to shine a light on this today, just to get some feelers out there for what people think. From what I can see, deferred writes, which is the mechanism that's meant to write objects onto the SSD with BlueStore and then sort of commit them to disk later, don't seem to apply to new objects; a sketch of that decision follows.
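A minimal sketch of the write-path decision being described, assuming Luminous-era behaviour; this is not the actual BlueStore source, and the 64 KiB cutoff is an assumption standing in for the combination of bluestore_min_alloc_size_hdd and bluestore_prefer_deferred_size_* that actually governs it:

```cpp
// Illustrative sketch only, not real BlueStore code. It models the reported
// behaviour: small overwrites of already-allocated space can be staged in the
// RocksDB WAL (deferred) and flushed to the slow device later, while writes
// needing fresh allocations, e.g. brand-new objects, go straight to the block
// device, so the client sees the raw device latency.
#include <cstdint>

// Assumed cutoff (64 KiB mirrors the Luminous-era HDD min_alloc_size).
constexpr uint64_t kDeferCutoff = 64 * 1024;

enum class WritePath { Deferred, Direct };

WritePath choose_write_path(uint64_t length, bool overwrites_allocated_space) {
    if (!overwrites_allocated_space) {
        // New object or new allocation: data goes directly to the block
        // device, and the ack waits for that write to complete.
        return WritePath::Direct;
    }
    // Small overwrite: journal it in the RocksDB WAL (on the fast device,
    // if there is one) and acknowledge before the slow-device write happens.
    return (length < kDeferCutoff) ? WritePath::Deferred : WritePath::Direct;
}
```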
C: So, for example, if I run a rados bench with 4k objects on a FileStore cluster versus BlueStore, I'm seeing a difference between, say, one millisecond average latency on the FileStore and well over ten on the BlueStore. This behaviour is really only synthetic, because in real life you don't really get to see this a lot, but the fact that deferred writes don't cover new objects does seem to be a bit of a performance drop.
C: Probably the biggest thing I'm seeing, though, is with sort of mid-sized writes, so around 64k to maybe half a meg in size. FileStore did a really good job of coalescing these, buffering them in the journal and then writing them out to disk in an efficient manner. With BlueStore, once you're over that deferred write threshold, everything's basically relying on the speed of the underlying disk, which in my case is 7.2k RPM drives.
C: You know, with multi-meg-size I/O you're just doing one write at a later date. Large writes, though, are pretty much okay. If you really try and do a synthetic benchmark with a hard disk and an NVMe journal, obviously the NVMe journal is sort of absorbing those writes faster, but once you get, you know, more I/O, more queue depth, it pretty much levels out, and it's actually okay. And the only other thing is spare memory is no longer…
C: Could the deferred write threshold just be increased to cover some of these slightly larger writes? But I don't know how well BlueStore would handle pushing more I/O through the insert into the database, you know, larger writes, more of them, more frequently. Do we get back into the situation where SSD wear becomes a problem again?
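For reference, the cutoff being asked about is exposed as a config option in Luminous-era releases; a hedged ceph.conf sketch follows, where the 512 KiB value is purely illustrative, and whether RocksDB tolerates that much extra WAL traffic is exactly the open question raised above:

```ini
# Illustrative only: raise the deferred-write cutoff on HDD OSDs so that
# mid-sized overwrites (up to 512 KiB here) take the journaled, deferred path.
[osd]
bluestore_prefer_deferred_size_hdd = 524288
```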
C
Is
this
won't
apply
to
new
objects,
though
in
cases
maybe
for
likes
ffs
or
if
you're
copying
new
data
into
IBD
you're,
probably
still
going
to
hit
this
same
performance.
One
thing
I
did
think
might
be.
The
Savior,
though,
was
using
the
cache,
tear
and
write
back,
but
it
seems
that
new
objects
get
written
to
the
backing
pool.
C: So when a new object gets written, instead of going into the cache tier and then getting flushed down later, it gets written directly into the backing pool. I suspect there might be a simple if that can be added somewhere, or maybe a variable to control that, somewhere in the function I've listed there; see the sketch below. But I don't know what the correct thing to look for is, to say: does this object not exist? If so, write it directly to the cache. I guess it's just something to disable the proxy write behaviour.
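A purely hypothetical sketch of that "simple if", with invented names throughout; it is not the actual cache-tier code, just the shape of the check being proposed:

```cpp
// Hypothetical sketch, not real Ceph cache-tier code. The idea: when a write
// targets an object that does not yet exist, absorb it in the cache tier
// instead of proxying the write through to the backing pool.
struct ObjectWrite { bool object_exists; /* ... payload ... */ };

enum class Target { CacheTier, ProxyToBackingPool };

// Assumed tunable, standing in for the "variable to control that somewhere".
constexpr bool kWriteNewObjectsToCache = true;

Target route_write(const ObjectWrite& w) {
    if (!w.object_exists && kWriteNewObjectsToCache)
        return Target::CacheTier;       // new objects land in the cache tier
    return Target::ProxyToBackingPool;  // today's behaviour for new objects
}
```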
A: Maybe one question for you, Nick: when you were diagnosing the write performance between 64k and 512k, what kind of diagnostics did you do, or what kind of behaviour did you see when you were looking at that?
C: What I'm basically seeing is that anything under, I think, 64k (or, you know, 63k and under; I'm not sure whether it's inclusive) gets deferred into the BlueStore DB and then flushed out later. Anything above that size, the I/O gets written directly to the block device and doesn't return an acknowledgement until that is complete. So in the case of a slow disk, that means the latency of the writes increases.
C: Yeah, I mean, that's what I was really interested in with the cache thing, because I think that would almost help in a lot of cases: hot objects would be going to the cache tier, new objects would be going there, and then everything else, including recovery, probably wouldn't be hammering it. So it'd be sort of a midway point between wearing out your SSD and getting the performance. Sure.
A: You know, a better fit, right: on onodes in BlueStore, or, you know, omap data in RocksDB, or even just having memory left over for data, like for read cache. But I think we would need to be really careful to make sure that once we start giving lots of memory to the buffer cache in BlueStore, that doesn't make trim events start being really slow or something, which is, I think, maybe the concern, especially if you do default buffered writes.
C: To clarify on this: the workload here is sort of, you know, measured in the hundreds of IOPS. It's probably not, I think, what you're looking at, Mark, when you actually start really hitting things hard and you start saturating several other components inside BlueStore. This is more about the underlying device latency being exposed through to the client, yeah.
A: So that stuff just goes into the buffer cache, like, immediately, and since your writes aren't that fast, you know, relatively aren't that fast, you might not hit the mempool thread too hard, because that's where all those trim events are happening. I've actually seen the mempool thread basically topped out at 100% CPU doing trim before; I don't remember exactly what the workload was, but yeah. That would be the thing to make sure of, I guess. But in this case, I don't know that it would hurt you too bad, yeah.
H: Half of it is my exciting work; the other half of it is Casey Bodley's exciting work, who is going to be using the ASIO-fied librados interface to make the RADOS Gateway fantastic and wonderful and low-latency and all those nice things that we hope for. So, first, a brief introduction to what ASIO is. ASIO is an asynchronous networking and I/O layer. It is more networking than it is I/O, because non-blocking I/O under many UNIX systems, especially Linux, is horrible and garbage and awful.
H: But that's okay, because we're a network system. Basically, its pattern is what's called a reactor: you submit a bunch of requests, the requests come along with completions, and the completions get executed in one of various ways. Basically, every ASIO-using application is going to have an I/O context, which is just sort of the big up-top thing that manages all the resources and has a reasonably large thread pool associated with it, and you can have sort of sub-context executors associated with that.
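A minimal sketch of that reactor shape in Boost.Asio (the timer and names are just for illustration): work is submitted against an io_context, and completions run on whichever pool threads are calling run().

```cpp
// Minimal reactor sketch with Boost.Asio: submit a request, get a completion.
#include <boost/asio/io_context.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_context ioc;  // the "big up-top thing" managing resources

    // Submit an asynchronous request; its completion handler runs later on
    // one of the pool threads driving the reactor.
    boost::asio::steady_timer timer(ioc, std::chrono::milliseconds(100));
    timer.async_wait([](const boost::system::error_code& ec) {
        if (!ec) std::cout << "completion executed\n";
    });

    // A small thread pool services the io_context.
    std::vector<std::thread> pool;
    for (int i = 0; i < 2; ++i)
        pool.emplace_back([&ioc] { ioc.run(); });
    for (auto& t : pool) t.join();
    return 0;
}
```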
H: ASIO also has these strands. For all the things executed on a strand, it guarantees that only one thing scheduled on the strand is going to be executed at a time. Even though where a task on the strand is executed may vary in terms of threads, you can sort of think of it as a logically enforced single thread of execution.
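A small sketch of that guarantee (Boost.Asio 1.66-style API; the counter is just a stand-in for shared state): handlers posted through a strand never run concurrently, even with several threads driving the io_context.

```cpp
// Strand sketch: logically serialized execution without a mutex.
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/strand.hpp>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_context ioc;
    boost::asio::strand<boost::asio::io_context::executor_type>
        strand(ioc.get_executor());

    int counter = 0;  // no lock: the strand serializes every increment
    for (int i = 0; i < 1000; ++i)
        boost::asio::post(strand, [&counter] { ++counter; });

    // Four threads service the reactor, but strand handlers never overlap.
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&ioc] { ioc.run(); });
    for (auto& t : pool) t.join();

    std::cout << counter << '\n';  // always 1000
    return 0;
}
```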
H: So if you are so inclined, which I am, you can remove a lot of explicit locking that involves waking threads up and putting them back to sleep and all that stuff, and rephrase it in terms of actions that are basically critical sections on strands. So you make better use of your CPU that way, and have fewer threads clogging everything up, and all that nice stuff.
H
Actually,
the
reactor
design
works
well
in
general
for
the
purpose
of
having
fewer
thread
pools
and
fewer
threads
in
general,
lying
around
I
know
that
we
have,
in
the
past,
had
lots
of
threads
and
maybe
used
more
than
we
would
like
to
and
would
like.
It
would
be
interested
in
using
less
of
them,
and
this
is
actually
one
way
of
going
around
that.
H
Basically,
the
fact
that
you
can
have
prioritized
or
other
organized
job,
well,
some
queues
and
sub
executors
inside
any
given
thread,
pool
basically
means
that
we
can
probably
subsume
a
lot
of
various
work
queue.
Functionalities
into
this
model
with
having
to
devote
explicit
thread,
pools
twelve
of
them.
H: There are a couple of different ways of doing it. Defer basically specifically requires that what you're doing now finish up and get done before whatever task you've scheduled is allowed to run. Let's see... and a lot of the usual sitting down, making a condition variable, waiting on it, waking things up, et cetera, et cetera, is really not a thing that needs to happen anymore, since all of that can be subsumed into the whole completion notion.
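A tiny sketch of that defer semantic in Boost.Asio (output strings are illustrative only): the deferred function is treated as a continuation, so it cannot begin until the handler that scheduled it has returned.

```cpp
// post() vs defer() sketch: defer marks the task as a continuation of the
// current handler, so it runs only after that handler finishes.
#include <boost/asio/defer.hpp>
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <iostream>

int main() {
    boost::asio::io_context ioc;
    boost::asio::post(ioc, [&ioc] {
        std::cout << "step 1 begins\n";
        boost::asio::defer(ioc, [] {
            std::cout << "step 2 (only after step 1 has returned)\n";
        });
        std::cout << "step 1 still running\n";
    });
    ioc.run();  // prints the three lines in the order above
    return 0;
}
```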
H: Completions are also nice in that they create a very, very easy way of turning asynchronous operations into synchronous operations. With the API that I was looking at, well, the API I have been looking at designing, after giving you the overview of ASIO, I found myself not actually wishing for synchronous versions of functions, because when I wanted to use something synchronously, it was ridiculously easy to just pass in use_future as the completion token.
H: Pardon me, as the completion token. That just gives me a future that I can wait on, bind tuples to, all that good stuff. It's very nice, it's very natural. It also works for coroutines, which I have not actually used all that much, but Casey has, and I'm sure he would be very happy to tell you about coroutines in general.
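A short sketch of the use_future trick described here (Boost.Asio; the timer is just a stand-in for any async operation): passing use_future as the completion token yields a std::future, turning the asynchronous call synchronous at the call site.

```cpp
// use_future sketch: an async operation consumed synchronously.
#include <boost/asio/io_context.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/use_future.hpp>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main() {
    boost::asio::io_context ioc;
    boost::asio::steady_timer timer(ioc, std::chrono::milliseconds(50));

    // use_future makes async_wait return a std::future instead of taking a
    // callback; errors surface as exceptions thrown from get().
    std::future<void> done = timer.async_wait(boost::asio::use_future);

    std::thread runner([&ioc] { ioc.run(); });
    done.get();  // block this thread until the completion fires
    std::cout << "timer fired\n";
    runner.join();
    return 0;
}
```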
H
My
part
of
the
project
basically
has
two
goals,
essentially
because
Casey
was
blocked
on
waiting
for
one
of
them.
I
have
been
focusing
on
that
one,
and
also
it's
the
one
that
requires
the
least
debunking
efforts
was
the
easiest
to
get
out
of
the
way
first.
So
this
is
sort
of
a
two
part
thing.
What
is
that
we
would
like
a
version
of
liberate
us,
and
this
is
a
version
of
liberator,
so
does
not,
as
some
people
have
feared,
it's
just
opening
up
the
object
here
to
anyone
who
wants
to
look
at
it.
H
It
is
nicely
typer
raised
and
hidden
away
and
all
that
stuff,
but
our
goals
were
basically
we're,
basically
that
we
wanted
something
that
was
sort
of
natively,
asynchronous
and
natively
integrated
into
this
library
with
them
and
above
overhead.
We
would
like
to
avoid
doing
as
many
allocations
as
possible.
H: That means handling things like namespaces, locator keys, that sort of thing, without necessarily having to spin up a very heavyweight object like the librados IoCtx. We were also able to gain a few efficiencies just by making this sort of experimental interface C++ only. By not really worrying about C, we have less in the way of required dynamic allocation for various stuff, and it makes it possible to do a whole lot more just on the stack as a result.
G: As well, we were hoping to unify all of the different thread execution contexts in RGW into one. That was why; all these other things are true too, but that was the top-level goal. I think we wanted to have one common execution context for a request, for dealing with all of its asynchronous states and synchronous states. Yes.
H: Although we can change that very easily: since the user passes in an I/O context that is bound to whatever threads he wants, the user can stick all sorts of other things on it and whatnot, and Casey will probably explain some of that design, since he's been looking at Beast and how to go about making better use of it. The secondary goals for me were to take advantage of some of the efficiencies provided by improving the Objecter.
H
That
turned
out
to
be
a
really
really
horrible
idea,
because
it
made
doing
small,
pervasive
rewrites
of
everything,
my
responsibilities
in
debugging
them
and
this
sort
of
consuming
a
whole
lot
of
time,
and
well,
it's
just
an
awful
idea.
So
everything
is
now
much
more
backward
compatible
and
you
can
just
pass
in
context
and
everything
just
works
all
that
nice
things
I
don't
know
do
not
end
up
having
to.
H: On things like the OSD session locks: I have been looking into the potential to replace much of the OSD session lock work with a per-OSD-session strand. All those sorts of things that depend on it can basically just be scheduled into the reactor in a way that they're guaranteed not to interfere with each other, rather than locking up and going to sleep over and over and over again and blocking threads. Similarly, I think we can get a reasonable amount of parallelism out of these changes.
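A hedged sketch of the idea, with an invented Session type; this is not the actual Objecter code, just the pattern of replacing a per-session lock with a per-session strand:

```cpp
// Hypothetical sketch: per-session strands instead of per-session locks.
// Work touching one session is posted through its strand, so the reactor
// guarantees the handlers never interleave, with no blocking or wakeups.
#include <boost/asio/io_context.hpp>
#include <boost/asio/post.hpp>
#include <boost/asio/strand.hpp>

struct Session {
    // One strand per session replaces the session lock.
    boost::asio::strand<boost::asio::io_context::executor_type> strand;
    int state = 0;  // only touched from handlers running on `strand`

    explicit Session(boost::asio::io_context& ioc)
        : strand(ioc.get_executor()) {}
};

void update_session(Session& s, int delta) {
    boost::asio::post(s.strand, [&s, delta] { s.state += delta; });
}

int main() {
    boost::asio::io_context ioc;
    Session s(ioc);
    update_session(s, 1);  // never runs concurrently with...
    update_session(s, 2);  // ...this one, despite no mutex anywhere
    ioc.run();
    return 0;
}
```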
H
That
regard
actually
have
a
couple
archaeological
/
prototype
branches.
With
some
more
of
these
changes
that
I
have
of
pulling
out
and
reverting
in
the
mainline
branch
to
so
I
could
get
the
sort
of
mainline
API
functionality
at
all,
but
once
I
get
that
debugged
and
acceptable
to
people
I'm
planning
on
rolling
some
of
these
sort
of
internal
efficiency
improvements
back
in
and
trying
to
make
them
available.
But
I
will
be
happy
to
take
questions
and
casey
can
also
talk
about
rvw
and
his
plans
for
them
if
he
wishes
and
depending
on
the
relative
order.
A
And
the
talking
about
threads
earlier
and
I
was
wondering
one
of
the
things
I.
Don't
really
know
that
much
about
ASIO
but
I've
seen
a
lot
of
mentions
about
using
different
models
for
how,
if
you're,
using
like
a
single
I/o
service
or
multiple,
like
one
I/o
service
per
thread
and
contention
at
the
reactor
level
to
what
what
kinds
of
issues
have
you
guys
run
to
her?
How
have
you
had
to
kind
of
design
things
at.
I: The plan is to break up the synchrony in request processing and yield the threads whenever they would block. With that, we'd be able to drastically reduce the number of threads that we need, and also be able to share those threads with other background work, such as Objecter completions and timers and stuff, along with a lot of the other background work that RGW does itself, like GC and other stuff.
H: That would require sort of a more pervasive change to the API to make it more useful. It's sort of something that works if you're the server and you control the whole application, and it's probably worthwhile as an allowed interface that clients could use if they wanted to. But for something where we're basically trying to make a bit of a general-use client library, it's hard to dimension it that way.
G
That's
interesting
sort
of
I
have
to
draw
for
us,
I
mean
my
goal
was
certainly
to
accomplish
that,
to
a
larger
extent,
then,
and
less
and
less
about
evolving,
liberate
us
dirty
so
I'm
hopeful
that
once
we
get
it
and
I've
that
you
know
that
gets
it
the
case
in
some
ways.
In
that
direction
we
can
do
we
can
buy
a
profiling.
Other
things
try
to
try
to
evolve
towards
what
what
performs
best.
H: I mean, this is a completely new API. It's just that the kind of thread-per-core thing is a very large change, and without sort of a compelling push in that direction, where we can do sort of a neat natural partitioning, it didn't seem worthwhile to try to develop that particular design until we had one. It's something that we can certainly add very easily on top of what we have, yeah.
G: I think it really is too early, as far as the gateway and its use of these services go, to take that design and compete it against, for example, something else. In the longer run, we don't want to use this interface the same way we were using the old one.
H: At present, what Casey is talking about is difficult to avoid, since the Objecter initiates events of its own, like ticks and whatnot; it's difficult to tell the Objecter which I/O context to use for every operation, because some of the things are going to be internal. But nothing stops us from having one Objecter each, or we can certainly look at trying to re-plumb the Objecter in such a way that we can give it multiple I/O contexts and sort of direct it that way. Like, if we have an internal sharding we want to enforce across cores, that's something we can add.
A
As
you
guys
are,
looking
at
this
I'd
be
curious,
I
am
I,
went
back
and
I
was
just
browsing.
Some
threads
I'd
read
earlier
about
people
talking
about
the
reactor
and
in
ASIO
and
one
of
the
things
that
I
remember
coming
up
was
someone
had
done
some
profiling
on
their
code
and
saw
that
a
large
portion
of
the
instructions
were
spent
on
locking
and
unlocking
by
the
I/o
service
poll
function.
I: On top of that, I mean, you can effectively only suspend once and then resume later when the completion comes back, so we have one thing in flight at a time. And the cool optimization of strands in ASIO is that they have a recycling allocator, so any time that you do an asynchronous operation that needs to allocate something, it can basically just keep recycling memory, because it knows it only has to allocate one thing, give it back, and allocate the next thing.
I: And I mean, that gives us kind of an incremental approach to take in RGW. Things are synchronous now; we can kind of add coroutine suspend and resume along the way and eventually break it up into more asynchronous pieces, and then, once we get there, we can do everything async and reduce the number of threads.
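A minimal sketch of that incremental style with Boost.Asio's stackful coroutines (spawn and yield_context; the timer stands in for any blocking step, and Boost.Coroutine must be linked): the coroutine suspends at the async call and frees its thread until the completion resumes it.

```cpp
// Coroutine sketch: synchronous-looking code that yields instead of blocking.
#include <boost/asio/io_context.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
#include <chrono>
#include <iostream>

int main() {
    boost::asio::io_context ioc;
    boost::asio::spawn(ioc, [&ioc](boost::asio::yield_context yield) {
        boost::asio::steady_timer timer(ioc, std::chrono::milliseconds(10));
        // Passing `yield` as the completion token suspends this coroutine
        // here; the thread is free for other work until the timer fires.
        timer.async_wait(yield);
        std::cout << "resumed after suspend\n";
    });
    ioc.run();
    return 0;
}
```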
A: Have you done any kind of, you know, real early-on benchmarking, or do you have any kind of idea of whether or not anything seems faster or slower?
H: Well, a couple of the small micro-benchmarks I've tried, just sort of in a hacky way, seemed a little bit faster. It might be because we were spending less time in the allocator just up and down the call path. I think not having to do quite as much allocation right up top, like I said, helps us a bit.
G: That's a good area for further suggestions. I mean, if you, Mark, and the people that are into micro-benchmarking of the APIs have some thoughts as to how we ought to attack it, guidance in that area could be useful. Something I've been interested in too, as we were discussing last week, is whole-program profiling of the RGW process; I think that's something that our team wants to do.
A: I think overall we've got some idea of kind of what skeletons are, you know, in the closet, and what things are not great on the client side. Every time I've looked at it, I've been surprised by, you know, things, and I probably forget them and get surprised again. So yeah, it'd be good to look at it. Yep, clients.
A: Yeah, the only thing on the client side that I really remember is how crazy the dcache is and how much of an impact that ends up having on fast devices, because it was kind of dominating everything else when it's enabled, in a bad way. Otherwise, I don't actually remember exactly what we saw on the client side; maybe some locking. Okay, well, I have to drop. All right, well, let's wrap it up then. Good work, guys, very interesting project.