From YouTube: Scalability Team Demo - 2021-04-08
Description
No description was provided for this meeting.
A
So Bob, you asked me where we are with the pack-objects cache. We turned it on Tuesday, I guess Tuesday or Wednesday depending on what time zone you're in, and that went well. But I came in in the morning and looked at the graphs, and something looked very wrong to me. I thought: this is not supposed to happen. Now that we have more traffic on the thing, I realized I had found a bug, and there's one server where this bug is particularly noticeable.
A
It doesn't work, but the good thing is that the fix is very small: it's like one guard statement somewhere. At one point the code had the property, but I never wrote the test that asserted the property, and while refactoring and trying to make everything simpler, I ended up with something that has the same effect from a user perspective, but not from a system perspective.
A
So the problem, the general problem, is back pressure. This is the safety feature: we don't do work that the user is not consuming, because otherwise a user can make a cheap request, and then we do a lot of work and we burn up the server. This is just a general principle.
A
This is a very important principle, and I think one of the reasons we can even host all this clone traffic is that people can download it fast enough, because if they couldn't download it fast enough, these upload-pack processes and related stuff would be very bad for your Gitaly server.
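A minimal Go sketch of that back-pressure principle (Gitaly is written in Go). The handler shape and the repository path are illustrative assumptions, not the actual Gitaly code; the point is only that writing straight to the client connection makes an expensive producer like git upload-pack stall whenever the client stops reading:

```go
package main

import (
	"io"
	"net/http"
	"os/exec"
)

// cloneHandler streams git upload-pack output directly to the client.
// The handler name and repository path are hypothetical.
func cloneHandler(w http.ResponseWriter, r *http.Request) {
	cmd := exec.CommandContext(r.Context(), "git", "upload-pack", "/path/to/repo.git")
	cmd.Stdin = r.Body

	out, err := cmd.StdoutPipe()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	if err := cmd.Start(); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer cmd.Wait()

	// io.Copy blocks on w whenever the client reads slowly, and that
	// blocking propagates through the stdout pipe: upload-pack stalls
	// instead of burning CPU producing bytes nobody is consuming.
	io.Copy(w, out)
}
```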
A
So, no, yeah, the system doesn't run well without back pressure. The bug, and it's really silly when you think about it, is that if a user hangs up in the middle of a request, the system, the cache, doesn't feel back pressure anymore. So it just goes: whoo, I can write all the bytes I want. And then, when it's done writing all the bytes, it tries to close the file, and it gets an error saying: yeah, you know what, the user went away.
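A rough sketch of that failure mode and of the kind of one-line guard that fixes it, with hypothetical names and a single consumer assumed for simplicity (the real cache, as described below, tracks multiple attached readers):

```go
package cache

import (
	"context"
	"io"
)

// Buggy shape: the copy loop never consults the request context.
// Writes land in a local cache file and always succeed, so a client
// hang-up only surfaces as an error from Close, after every byte of
// the response has already been produced.
func fillEntryBuggy(ctx context.Context, src io.Reader, dst io.WriteCloser) error {
	if _, err := io.Copy(dst, src); err != nil {
		return err
	}
	return dst.Close() // "the user went away" arrives only here
}

// Fixed shape: one guard statement per chunk restores back pressure
// by aborting as soon as the request is cancelled.
func fillEntry(ctx context.Context, src io.Reader, dst io.WriteCloser) error {
	buf := make([]byte, 32*1024)
	for {
		if err := ctx.Err(); err != nil { // the guard statement
			dst.Close()
			return err // stop producing bytes nobody will read
		}
		n, rerr := src.Read(buf)
		if n > 0 {
			if _, werr := dst.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if rerr == io.EOF {
			return dst.Close()
		}
		if rerr != nil {
			return rerr
		}
	}
}
```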
A
Oh no, that's not how that part of the cache works. If one user comes in and requests something that creates a cache entry, and then a second user comes in and wants the same thing while the thing is still busy, they become attached to the same cache entry, and then if one of them leaves, nothing bad happens.
A
So it's a reference count. Only when the number of users interested in a cache entry drops to zero do we say it's bad and we don't want it anymore, and that part is working fine. But what I found is that on file-09 there is a repo of 1.7 gigabytes, and for some reason, well, it gets cloned frequently, and some users just hang up after a couple of megabytes, and then the cache goes: I've got a gigabyte and a half more of data.
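A minimal sketch of such a reference-counted entry, with hypothetical names (this is the mechanism as described, not the actual implementation): readers attach when they request the same key, detach when they leave, and only the last detach discards the entry:

```go
package cache

import "sync"

// entry is a cache entry shared by all users requesting the same key.
type entry struct {
	mu   sync.Mutex
	refs int
	done func() // discard the entry: remove the file, stop the writer, etc.
}

// attach registers another interested user, including one who joins
// while the entry is still being written.
func (e *entry) attach() {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.refs++
}

// detach is called when a user hangs up or finishes reading. Nothing
// bad happens while other users remain attached; only when the count
// drops to zero is the entry declared dead and cleaned up.
func (e *entry) detach() {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.refs--
	if e.refs == 0 {
		e.done()
	}
}
```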
A
This is not supposed to be possible. Then I had to find the bug. Going back to this particular server: if you look at the apdex graph, you see that it took a hit the moment the cache got turned on, because of this phenomenon, because we're writing all these bytes as fast as possible that nobody's going to read, because the clients hang up.
A
But it's just below, I mean just above, the threshold where it triggers an incident, and if something else happens, then it dips. So we had one incident on this server because this happened, but that apdex line is a noticeable amount lower than it should be. That's, yeah, so that's roughly what happened, but then, yeah.
A
I guess I was hoping to talk to Matt about that last night, but then we were in an incident about the security and related security thing, and I haven't even... I just completed the work. I don't even know what the current status is of file-09 or what we do with the cache.
A
I was looking yesterday and things looked a lot better on canary one, but I now think that all instances are suffering from this safety mechanism not working. But let me just quickly pull up canary one and, yeah. It's better. It's still not amazing.
C
And the status of the bug: I know that you submitted it in Gitaly. Is it blocked somewhere, or merged?
A
So that means that, yeah, at the moment it first needs to get merged into Gitaly. I don't know how the part of the process works where we take Gitaly commits and integrate them into the main repo. I don't know how often that happens.
C
Okay, that sounds good. Is it dangerous to keep the feature on, or are you confident it's fine to keep it on this way?
A
It looks like it's okay, but on that one server, if we start having more incidents, then I think we need to at least turn it off there. In general it seems like we're fine, the infrastructure can handle it, but it's a safety feature that's there for a reason and it's not working right now, so it'd be better to have it on. Yeah, okay.
C
So I just checked: once an hour we update the components, okay.
A
So, like, today or tomorrow it should catch the next deployment pipeline.