From YouTube: 2023-01-12 Scalability Team Demo
A
It has to do with uploading to object storage, which is something we spent time on about a year ago from the side of security frameworks, and it's sort of lucky that this problem is now being picked up and we can help out a bit, or I'm trying to help out.
A
So the problem is that the way uploads work in GitLab is weird, and the specific problem is that we use a gem that cares about the file name of your uploads. So when a user uploads a thing, it wants to store the thing with the correct file name, and the problem is that when the upload starts, you don't know yet what the file name is supposed to be. Because if you think of CI artifacts, they all have their own database row, and that row gets created when the request is done, or towards the end of the request. But the upload comes in at the start of the request.
A
So what happens in our current setup is that we write the uploads somewhere in object storage under a temporary name that we know to be wrong. And then, during the Active Record callbacks, when we save the row for the artifact uploads, we get a database ID, and then we know the actual name, because of the way the thing is written: it wants the database ID to be part of the name. So only then do we know the final name of the uploaded artifact, and we need to perform either a move or a copy.
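The flow just described can be sketched roughly like this, with a hypothetical in-memory bucket standing in for object storage (none of these class or method names are GitLab's real ones; this is only an illustration of the sequencing problem):

```ruby
require "securerandom"

# Hypothetical stand-in for an object storage bucket.
class FakeBucket
  def initialize
    @objects = {}
  end

  def write(key, bytes)
    @objects[key] = bytes
  end

  # Server-side copy: the bytes stay inside the storage provider.
  def copy(from, to)
    @objects[to] = @objects.fetch(from)
  end

  def delete(key)
    @objects.delete(key)
  end

  def keys
    @objects.keys
  end
end

bucket = FakeBucket.new

# 1. The upload arrives at the START of the request; the database row
#    does not exist yet, so we write under a temporary name that we
#    already know to be wrong.
tmp_key = "tmp/uploads/#{SecureRandom.hex(8)}"
bucket.write(tmp_key, "artifact bytes")

# 2. Towards the END of the request the artifact row is saved and we
#    finally get a database ID; the uploader wants that ID in the path,
#    so only now do we know the final name.
artifact_id = 42
final_key = "artifacts/#{artifact_id}/artifact.zip"

# 3. Which forces a second step: a move (copy + delete) in object storage.
bucket.copy(tmp_key, final_key)
bucket.delete(tmp_key)
```

The move in step 3 is the extra work the merge request discussed here is trying to eliminate.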
A
I think we're already at the point where it happens outside the Active Record callbacks, because when this happens in Active Record callbacks, that would cause database contention, because the callbacks run in the database transaction.
A
So then we would have a long-running transaction. This already happens outside a transaction, which is something, but it still happens during a Puma request, which also has a timeout, and the problem is that it shouldn't be happening in the first place. So that merge request is from somebody who's going to try and solve this from the CI side, and that's exciting, because it's a problem I want to see solved, but I had trouble making it clear how badly it needed to be solved.
A
And now the people from CI have decided that they think this needs to be solved, and I get to help out a bit.
A
The storage blob, where the actual bytes are, yeah, awesome. So the key is really to choose the name early and not late, and the simplest way to choose the name is just a random name, and accept that it's not a pretty name. But we have to choose a name when we start storing the files. And another technical bit that was open, but I think is more or less settled now, is that these temporary uploads, these names, will get stored in Redis.
A
So, first of all, you need to remember that this thing exists in between requests, but also, if it fails, you need to delete it. So we're going to have a Redis data structure that contains all the in-progress uploads, and we periodically iterate through that whole thing and delete: they'll have timestamps, and anything that is older than an hour or something we delete from object storage.
A
To hear that it makes sense that we persisted it, yes. But the good thing is that we're only storing the name of a blob; we're not storing the contents of a blob. And it's probably going to be a Redis hash. The number of elements in this hash is the number of uploads that are in flight, so that's a limited number. It is not the number of uploads ever.
A
Yeah, we haven't built this yet, or Eric hasn't built this yet. But what I think we'll do is have a Redis hash, because the most common case is that you say this blob is being uploaded, and when you finalize it, you delete it, yeah. So adding and deleting from this data structure is the most common operation.
A
And
that
is
not
you
don't
want
to
do
a
ton
of
simultaneous
age
scans,
but
we
would
use
Google
currency
control
to
do
only
one.
So
that
will
be
fine
and
it's
not
going
to
be
a
huge.
It's
not
going
to
be
a
huge
hash,
because
what
is
the
number
of
simultaneous
uploads?
You
reasonably
expect
yeah.
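A toy version of that bookkeeping, with a plain Ruby Hash standing in for the Redis hash (in Redis these operations would be HSET, HDEL, and a periodic HSCAN run by a single sweeper holding a lease; every name below is made up for illustration, not GitLab's actual code):

```ruby
MAX_AGE = 3600 # entries older than an hour are assumed abandoned

# Stand-in for the Redis hash: temporary blob name => start timestamp.
in_flight = {}
deleted_from_object_storage = []

# Most common operation: add an entry when the upload starts (HSET)...
def track(in_flight, key, now)
  in_flight[key] = now
end

# ...and remove it when the upload is finalized (HDEL).
def finalize(in_flight, key)
  in_flight.delete(key)
end

# Periodic sweep (HSCAN), run by only one process at a time: delete the
# object storage blob behind any entry that never got finalized.
def sweep(in_flight, now, deleted)
  stale = in_flight.select { |_key, started_at| now - started_at > MAX_AGE }
  stale.each_key do |key|
    deleted << key          # stand-in for the object storage DELETE call
    in_flight.delete(key)
  end
end

now = 10_000
track(in_flight, "tmp/abandoned", now - 7_200) # started two hours ago
track(in_flight, "tmp/fresh", now - 60)        # started a minute ago
track(in_flight, "tmp/done", now - 120)
finalize(in_flight, "tmp/done")                # the normal, happy path

sweep(in_flight, now, deleted_from_object_storage)
```

Because entries are removed again on finalize, the hash only ever holds the in-flight uploads, which keeps the sweep cheap.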
A
And the other thing this will enable, which we can also implement eventually but don't have to yet, is to keep a Redis data structure of pending deletes.
A
So we don't already do this, but one of the problems I tried to solve, or think of a solution for, earlier on was to track objects being created and objects about to be deleted. Because, I think, maybe we still do send delete requests to object storage during a SQL transaction. So that's bad, and what you really want, if you delete the artifact's row from the database, is to just say: oh, schedule something.
A
Well, yeah! Well, this is where I got involved and started trying to nudge a little bit, because I want this to be generic, and it's going to live in one place: we have a generic class that does all this crazy object storage copying stuff that I was talking about in the beginning, and that is defined in one class, so in that class we would also define this.
A
The
only
problem
is
that,
in
order
to
use
the
this
new
Behavior,
you
need
to
have
a
database
column
that
you
can
store
the
random
name
in,
so
it
would
be
an
opt-in
Behavior.
So
over
time
we
would
have
to
go
through
all
the
different
types
of
uploaded
artifacts
and
create
a
database
column.
To
say
this
is
the
actual
name.
A
Well,
that
was
the
design
I
had
in
mind
earlier,
but
you
get
a
lot
of
churn
on
that
table
and
I
like
that.
This
approach
focuses
on
the
the
most
important
part
of
the
problem,
which
is
I,
think
the
copy
and
and
I
think
redis
is
a
suitable
tool
for
this,
and
you
could
it.
This
is
a
more
iterative
approach,
my
idea
of
having
a
database
table
where
we
track
all
blobs
and
we
do
all
these
things
is
well.
A
This was very abstract, with absolutely no pictures or code to look at, and some people still know what I'm talking about. But yeah, thanks for listening, and this is still ongoing.
C
Are you also providing them with, like, kind of observability knowledge and that sort of side of things as well? Like saying, hey, this is what... like helping them out with the data and what's happening in production, or just on the...
A
You
mean
to
understand
the
scope
of
the
problem.
They
need
to
solve.
Yeah
I
think
they
they
haven't
asked
they
figured
out
on
their
own
that
they
want
to
solve
through
another
way
that
they
want
to
solve
this
problem.
I
I
just
got
roped
in
because
people
were
talking
about
the
code
side
like
how.
C
I was just sort of thinking that, you know, one of the things that Scalability does is kind of convince people through the data and everything, but it sounds like they convinced themselves.
A
Yeah, and it's a long-standing problem. We actually monkey patch the cloud storage gem; we monkey patch something to make a server-side copy API call in the first place, because otherwise it would be a copy where we download the data from one place and upload it again to another place.
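The difference between those two copy strategies can be sketched with a hypothetical client that counts how many bytes pass through the application process (the method names here are illustrative stand-ins, not the real gem's API):

```ruby
# Hypothetical client that counts bytes flowing through the Rails
# process, to show what the server-side copy patch avoids.
class FakeStorageClient
  attr_reader :objects, :bytes_through_app

  def initialize(objects)
    @objects = objects
    @bytes_through_app = 0
  end

  # Naive copy, leg 1: pull the blob down into the application...
  def download(key)
    data = @objects.fetch(key)
    @bytes_through_app += data.bytesize
    data
  end

  # ...leg 2: push it back up under the new name.
  def upload(key, data)
    @bytes_through_app += data.bytesize
    @objects[key] = data
  end

  # Server-side copy: one API call; the provider moves the bytes and
  # the application never touches them.
  def copy_object(from, to)
    @objects[to] = @objects.fetch(from)
  end
end

client = FakeStorageClient.new("tmp/blob" => "x" * 1_000)

# Without the patch: download + re-upload, 2000 bytes through the app.
client.upload("naive/blob", client.download("tmp/blob"))
naive_cost = client.bytes_through_app

# With the patch: a single server-side call, 0 bytes through the app.
client.copy_object("tmp/blob", "fast/blob")
```

For CI artifacts that can be gigabytes each, that per-copy traffic through Puma is the cost being discussed.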
C
It would be really interesting to know what percentage of Puma time on the cluster, or maybe on the API, is that. Because, I mean, CI artifacts is a lot of time, right? It's...
A
Not
just
the
artifacts
I
actually
know
that
David
Fernandez
from
package
was
doing
some
queries
where
he
was
looking
at
the
logs,
where
we
have
external
HTTP
metrics,
one
Min
added
those
one
or
two
years
ago,
and
he
was,
and
so
David
was
saying.
Well,
if
I
have
a
request
that
is
being
handled
by
package
and
it's
making
an
external
HTTP
call,
then
there's
almost
certainly
an
object,
storage,
uploads
and
he
came
up
with
some
staggering
amount
of
time
spent
doing
external
hdp
equals
and
those
are
properly
probably
mostly
the
copies
yeah.
A
So
it's
going
to
yeah
it's
going
to
save
a
lot
of
CPU
time,
probably
on
on
the
API
Fleet
and.
A
I,
don't
think
that's
part
of
this
work
and
I
again,
I
think
this.
That's
why
this
work
is
likely
to
succeed
because
it
doesn't
try
to
solve
all
these
problems
at
the
same
time
as
much
as
to
my
mind,
they're
all
part
of
the
same
design
problem,
but
pragmatically
it's
better
to
treat
them
as
separate
problems.
A
I
think
we
could
already.
We
could
fix
that
in
parallel,
like
completely
independently
by
doing
yet
another
override
of
carrier,
wave,
behavior
and
saying,
if
Gary
away
thinks
it's
deleting
a
file,
we
instead
schedule.
A
We schedule, yeah, either schedule a worker, or... maybe scheduling a worker is good enough, now that I think about it. But do something that triggers the deletes, and don't actually do it synchronously, yeah.
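A sketch of that kind of override: prepend a module so that when the uploader thinks it is deleting a file, we capture the path and schedule a worker instead of deleting synchronously. The class names and the worker below are hypothetical stand-ins, not GitLab's or CarrierWave's actual code:

```ruby
# Hypothetical uploader with a synchronous delete, standing in for the
# CarrierWave behavior being overridden.
class Uploader
  DELETED = [] # pretend object storage: paths that were really deleted

  attr_reader :path

  def initialize(path)
    @path = path
  end

  def remove!
    DELETED << path # imagine a slow object storage DELETE call here
  end
end

# Hypothetical background worker that performs the real delete later.
class DeleteWorker
  QUEUE = []

  def self.perform_async(path)
    QUEUE << path
  end

  # What a Sidekiq-style worker would do outside the Puma request.
  def self.drain!
    Uploader::DELETED.concat(QUEUE)
    QUEUE.clear
  end
end

# The override: remove! now only enqueues, keeping the web request fast.
module AsyncRemoval
  def remove!
    DeleteWorker.perform_async(path)
  end
end

Uploader.prepend(AsyncRemoval)

uploader = Uploader.new("artifacts/42/artifact.zip")
uploader.remove!               # during the request: just an enqueue
deleted_during_request = !Uploader::DELETED.empty?

DeleteWorker.drain!            # later, in the background
```

`Module#prepend` puts the override ahead of the original method in the lookup chain, which is one way this kind of behavior swap can be done without editing the gem itself.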
B
No, it's when you click a delete somewhere in the application: a whole bunch of stuff needs to happen, including deleting objects from artifacts and so on. What they do is create rows in a table saying these are all the things we need to delete, but now they don't count towards your storage or any of these things anymore, and then we just work through them. Yeah, it's another hack on CarrierWave, where CarrierWave wants an object to be attached to, like, an Active Record object, yeah.
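That pattern, rows recording pending deletions that stop counting towards storage immediately and get drained later, can be sketched like this (a toy model only; the real table and worker are not shown here):

```ruby
# Toy model of the pending-deletions pattern: deleting in the UI only
# INSERTs rows; a background job works through them afterwards.
PendingDeletion = Struct.new(:path)

pending_deletions = []                                  # the table
object_storage = ["artifacts/1/a.zip", "artifacts/1/b.zip"]
counted_towards_storage = object_storage.dup

# "Clicking delete" is cheap: record the rows and immediately stop
# counting the objects towards the user's storage quota.
object_storage.each { |path| pending_deletions << PendingDeletion.new(path) }
counted_towards_storage.clear

# Later, a worker drains the table and issues the real deletes.
until pending_deletions.empty?
  row = pending_deletions.shift
  object_storage.delete(row.path)
end
```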
A
So, but this is not for avoiding doing the delete during the SQL transaction. There's another problem here that Andrew is maybe thinking of, which is... that was actually what you just talked about, Bob.
A
If
you
do
a
delete
on
one
thing,
then
there
is
a
thousand
dependent
things
that
also
need
to
be
deleted
and
the
way
rails
works
is
that
it
would
load
those
as
one
row
at
a
time
and
run
the
delete
callbacks,
and
it
sounds
like
what
this
is
doing
is
to
make
sure
that
we
don't
do
all
that
work
during
a
web
request.
So
we
just
make
the
delete
button
faster.
A
Kind of different, yeah, so it's a different problem, but it's related. And I also know that it's on David's mind, because Package also has this problem: some package features create lots of dependent records, so when people delete the package, a whole bunch of stuff has to be deleted as a consequence. I think if we can get to a point where every...
A
No, actually, you don't even need that. What you need is a way to say: here's a database record, here's the path to an object that belongs to it, and I know it needs to be deleted. And I store that path somewhere, and now I just don't run the CarrierWave callbacks for this object. But that doesn't solve the problem that you would have to instantiate lots of objects from the database.
B
And it's not like... the problem that Andrew was talking about is the cascading deletes in the database itself, where we have different foreign keys, yeah, yeah, first foreign keys and then...
D
Is it feasible to do so at this stage? And the reason I ask is, I guess, to Andrew's point earlier. I think at the beginning of the demo here you said we got quite lucky that something we worked on some while ago is now going to be useful and be used, and I think you're probably 90% right. But I think there also was some foresight, right? We knew that it was an area that needed some investment, like object storage in general, but we didn't necessarily know, maybe, how to prove the value of it, or how to at least show the value at the time, to kind of get that work prioritized.
A
It's a performance bottleneck, and my guess would be that there's probably a customer who does lots of large uploads, who sees lots of their large uploads fail, who somehow, through their size, has enough clout.
A
That's
one
of
the
ways.
These
way
these
things
float
up
right,
there's
always
something
that
doesn't
work
right,
that
users
don't
like,
but
how
loud
is
the
voice
of
the
user,
so
it
could
be
that
a
user
with
a
particularly
loud
voice
is
saying.
Please
fix
this,
that's
guessing,
but
that's
what
considering
it's
on
the
product
side.
I
expect
something
like
that.
A
I mean, but in all seriousness, this is a problem that should be visible in their error budgets, and people already care about problems that float up in their error budgets.
A
Well, but then whether it's cheating or not also depends on who uses it, or how it gets used. And if something gets used a lot, or gets used by users with a loud voice, and it doesn't work well, then cheating doesn't help.
D
It's also not just latency- or performance-focused, right? Presumably there'll be some cost benefit off the back of this, which I know we're not there yet on, probably, in terms of really using that as an input into stage groups. But it just feels like this is an interesting use case to think about: what can we do to try and influence these decisions to be made earlier? What data inputs can we give to the teams? Because, as you say, I think naturally what happens is a big customer shouts louder than anyone else, and it naturally moves to the top of the backlog.
D
Rarely do these things just naturally make their way up, and it's kind of: what leverage and what insights do we have to try and get this type of work prioritized sooner? Which is kind of what we were trying to do with object storage improvements nine months ago, but we just didn't really know how to, I guess, prove the benefit, or show the benefit, to some of the stage groups.
A
As
your
first
question,
Liam
I
think
that
latency
really
goes
a
long
way
because
latency
correlates
with
user
happiness,
but
it
also
correlates
with
CPU
spent.
So
it's
a
proxy
for
both
those
things.
So
no
nobody
should
be
burning
up
a
lot
of
latency.
D
That's again back to Andrew's point, right? You sort of accept the status quo, because each team has a ton of priorities, and it's kind of good enough today. But there are areas where we believe there is significant room for improvement, and: how do we shine more of a light on those things?
A
Yeah
it's
it's
also.
Sometimes
I
mean
this.
This
is
a
known
problem.
Everybody
knows
that
the
way
this
works
now
is
not
great,
but
it's
it's
not
always
clear
that
things
can
be
improved
in
the
first
place.
You
I
think
it
depends
on
it's
Case
by
case,
like
is
the
is
there?
A
Is
there
a
more
efficient
way
to
do
this
thing
right
here
here
we
can
be
more
efficient
by
just
not
doing
something,
but
just
from
seeing
that
something
spends
certain
requests
are
slow,
doesn't
tell
you
that,
there's
the
opportunity
you
have
to
go
in
and
look
that
could
be
our
job
to
see,
to
see
things
and
look
and
say
or
well,
or
you
can
say
it's
the
job
of
the
team
that
owns
the
code
to
look
at
it.
So
we
could
go
around
saying
to
people
hey.
A
But
maybe
they
they
say
we
looked
and
we
don't
see
a
way
and
then
you
could
argue
with
that
because
they
don't.
But
that
doesn't
mean
there
isn't
a
way.
A
Yeah, but the motivation here is not even the multiplier. It's just that this one feature alone is important and is one of the heaviest users. Probably the heaviest user of object storage in GitLab is the registry, but the registry is an isolated application, and we don't seem to have to help them much, but...
A
What I think... the way this is playing out is good, because we, or the team that owns CI, or that owns these artifact uploads... there's a more real need motivating this work.
B
What I found hard to convince people of... like, for example, what I was recently looking into together with Hercules a little bit was slow Git endpoints, info/refs and internal/allowed. And I, as a user of GitLab, noticed that these things are slow, and that competitors do it faster, and I think they should be faster. And the error budgets also show that; the thresholds are one second or five seconds in some cases, but that's not... so now we're going to end up creating issues for those endpoints, like:
B
Let's make them faster and set higher urgencies on them. But what's the carrot? Why would they? Because I complained that the endpoint's slow and Git pushes are slow? Well, I complained about that a lot already, but apparently there are no customers complaining about that. There are no customers complaining enough about things that we...
C
I know, so just kind of on that point: one of the things that will be quite interesting is that with Dedicated we're gonna have, like, much better metrics and insights into what's making customers... you know, at the moment customers will often say: oh, this thing's slow, or that's slow. We don't have...
C
You
know
the
same
kind
of
metrics
as
we
have
on
gitlab.com
or
there,
and
we
certainly
don't
have
logs
or
anything
like
that
in
most
cases
and
it's
very
difficult
to
actually
diagnose
what
the
problem
is
for
these
customers
and
now
it's
dedicated
we'll
have
sort
of
a
insight
into
what
a
lot
of
self-managed
customers
are
experiencing
and
so
we'll
be
able
to
use
that
as
a
proxy
and
say
like
hey.
C
You
know
this
is
a
directly
affecting
this
customer
and
by
definition,
they're
all
big
customers
and
so
there'll
be
some
weight
behind
that.
But
I'm
really
excited
about
the
observability
part
of
of
dedicated
and
seeing
other
customers.
Not
you
know.
Gitlab.Com
is
one
case,
but
there's
a
lot
of
other
cases
that
self-managed
customers,
you
know,
hits
that
we
don't
necessarily
see
on
kitlab.com,
which
we'll
see
with
with
dedicated
and
observability.
We
got
there.
C
Especially
because
the
the
instant
sizes
are
kind
of
fixed,
it's
not
like
gitlab.com,
where
we
just
you
know,
scale
infinitely.
You
know
the
you
know
a
reference
architecture
is
a
reference
architecture
and
so
for
customers
using
you
know
all
the
CPU
to
to
do
their
uploads.
That's
going
to
quickly
become
limited
and
they're
going
to
say.
Oh,
this
is
really
slow
and
then
we'll
investigate
and
go
well.
C
It's
because
the
artifacts
endpoint
is,
you
know
doing
these
silly
things
and
so
I
think
there's
going
to
be
a
whole
new
sort
of
line
of
incredib
issues
that
come
out
of
dedicated
at
some
point
and
and
it
will
benefit
like
particularly
self-managed,
a
lot
other
self-managed
customers,
hopefully,
and
us
on
gitlab,
sauce
or
gitlab.com.
F
Yeah, I agree. I think the scaling constraints are very different on Dedicated, and the workload is very different. It's a completely different scene; I don't want to call it a sample, because gitlab.com is such a diverse set of use cases and workloads, and it's very easy for...
F
Exactly
like,
we
have
this,
like
multiple
years
of
terrible
Road
level,
lock
contention
that
that
only
occasionally
gets
attention,
but
has
been
present
for
a
very
long
time,
just
just
as
kind
of
a
recent
top
of
Mind
example.
C
Good... I mean, it would be a very good thing to raise up, and...
C
...you know, not thinking of gitlab.com and GitLab Dedicated as separate places; they're all GitLab SaaS, just different flavors of that. And so one of the things that ties into is, like, Scalability working on those GitLab Dedicated issues as well, and that ties in directly with, like, you know, GitLab Dedicated should have access to those metrics. Yeah, logs is a little bit more tricky, you know, for everyone, but certainly the metrics are going to be.
C
Yeah
I
mean
the
I
mean
even
support.
People
are
not
ever
sorry,
that's
that's
kind
of
the
wrong
way
to
say
it,
but
support
people
have
got
access
to
the
logs,
and
you
know
at
the
moment:
it's
it's
really
poorly
done.
You
know
on
and
that's
on
dedicated
and
you
know
because
we
haven't
had
time,
but
it
is
a
shared
login
right
and
so
there's
a
read-only
Cabana
login,
which
needs
to
be
fixed
Pronto
but
there's
stuff
with
OCTA.
That
needs
to
be
done
in
order
to
get
that
right,
complication
but
yeah.
C
So,
but
at
the
end
of
it,
like
yeah
I
mean
technically
our
logs
are
only
supposed
to
have.
You
know
we
we're
supposed
to
they're
supposed
to
be
fairly
secure,
but
obviously
you
can
still
see
project
names
which
some
people
get
more
upset
about
than
others
right
and
usernames
and
yeah,
but
but
but
yeah
I.
Think
there's
like.
Certainly
if.
C
You
know
the
scalability
should
be
the
first
team
to
get
in
there
and
be
diagnosing
those
things,
and
you
need
logs
for
that.
A
Yeah, I was going to say, like, whoever is working on this will need logs. Logs are... and I understand, but I was curious what the balance was here, because I understand that the expectations of the customers are different in the case of Dedicated, or they might be different, when it comes to their logs, yeah.
C
I mean, yeah, so far, you know, support has got access to the logs, so they can't be that strict, right. Yeah, I'm sure that it'll... I don't know if it'll change, and it very likely will, but obviously then FedRAMP's a whole different kettle of fish.
C
Yeah, also, the difference is, right, like, we're using Kibana in both cases, and we know what the fields are and everything. Where, like, when I've been talking to... exactly this problem, yeah.
A
Stephanie and I were looking at one customer escalation where I ended up taking JSON logs and ingesting them into SQLite, because, yeah, I just had to count some things, and there was no Kibana or anything. Like you say, the field names would have been different. I had to go through so much friction just to do some elementary log queries, and if we have people at least on the same logging stack, then everybody knows what queries to run, yeah.
C
Just on that... I don't know, it's something that I've been meaning to ask some support people, but has anyone actually used Loki's CLI mode? Because, you know, Loki, the log aggregator: they've got a mode where you don't run it as a server, but you just run it as a CLI across your logs, and you can use the LogQL query language and do all the things in CLI mode, and I think, like...
C
Yeah, exactly, yeah. But on most of the customers, either they don't have anything, or they've got some strange custom system, or, you know, it's all flat, doesn't have any analysis, you know, can't roll anything up. But the last thing I just wanted to say as well was that I think one of the things that's kind of interesting is that with some of the customers that are moving to Dedicated, often what I hear is like: oh, we're having these really bad performance issues.
C
I know, yeah, it's a really... it's a really interesting sales methodology as well.
A
I
don't
know,
I
think
it
makes
a
lot
of
sense
that
people
want
dedicated
yeah
but
yeah
we're
going
to
be
we're
going
to
have
to
solve
interesting
problems,
but.
C
Yeah
and
the
the
you
know
the
value
that
the
kind
of,
if
we
get
it
right
and
we
have
the
value
out
of
like
all
the
the
forecast.
You
know
the
the
four
car
the
capacity
planning
on
the
customers,
and
you
know
the
security
side
and
what
they
bring
to
the
table
like.
We
can
really
scale
that
up
right
and
really
kind
of
add
a
lot
of
extra
value
over
just
the
straight,
like
Hey,
we're
running
your
instance.
For
you,
a
gem.
F
Yeah, there are some interesting constraints too with Dedicated, and I'm not quite sure how we'll solve them. Like, for example, sometimes with gitlab.com we'll identify an anti-pattern, implement a feature-flagged new behavior, and roll that out, toggling it on and off with feature flags.
F
But the turnaround time for deploying, you know, an experimental workaround on .com is, I think, generally going to be much, much shorter than it would be for Dedicated, right? Because I thought part of the value proposition of Dedicated was that customers of Dedicated have some input on the cadence for deployments.
C
So
if
you're
patching,
if
you're
back
porting
those
things
they'll
get
them
straight
away,
but
even
then
we
don't
do
daily
deploys.
We
only
do
once
a
week
deployers,
so
it
gets
slow.
It's.
C
Right, yeah, and that's like a feature, right? Like, that's something that they're... so yeah.
F
So it is still possible; it would just be, like, measured in days rather than weeks or months.
F
I've not looked at any of the Dedicated systems. So, having metrics, logs, and the capacity to run ad hoc observability tools are kind of the three pillars of being able to answer difficult questions about system behavior. Do we... does someone... can we put together a set of people that have all three of those things?
C
It's something I can raise an issue about, if you like, because it ties in with the theme of, you know, One Source platform. And then, on the logs, I think the thing with the logs is probably on a needs-be basis until we get Okta, and then we can actually see who's logging in, as opposed to at the moment, which is, like, oh, the shared account login, which is super sketch, right? Yes.
F
Yes, for sure, yeah. Yeah, like, I'm kind of thinking of, like, an escalation thing where maybe a few of us get pulled into doing analysis. If we can just do a screen share with someone that's got access to logs, that's sufficient, yeah.
F
Fantastic. And do members of the Dedicated team have, like, psql access? Or read-only CLI access? Can we run perf or BPF tools?
C
Yes, I mean, they can get onto the machines, yes. And yeah, it's a little bit tricky, because we kind of go through layers to get in there, but ultimately you can, because we realized pretty quickly that it's very difficult to do anything as an operator without a break-glass procedure, yeah.
C
Yeah, it's... I mean, it's basically... there's a role, an IAM role, that's a break-glass role, and then, because we use SSM Instance Connect, or one of those AWS SSH-like technologies, once you're in that role you can kind of jump on anywhere and do anything. But it's all audited, which is really nice. Yes.
E
Yeah, I didn't... I was like, I guess we're done, yeah. Have a good day, Matt, yeah.
F
Haven't
kept
up
with,
with
with
the
the
work
that
you're
spearheading.
E
Hopefully
you
start
moving
forward
on
splitting
off
the
repository
cash
I'm,
hoping
we'll
start
moving
the
staging
ones
next
week,
we'll
see
if
and
Mars
have
not
been
merged
yet
so
stay
tuned,
but.
E
So I suspect that, as soon as we start moving forward with the Redis repository cache, I'll grab whichever backend engineer happens to be the person I'm working with at the time, and we'll start doing the work to get the load balancing started, so yeah. Hopefully we can plow through both of these relatively quickly, and hopefully at some point you will have an end date for Redis Cluster, and I'll know that I don't have to do this anymore, yeah.
F
That's a good point, yeah. I was thinking of it as lost time, but I guess it's also time without this thing stumbling under stress, yeah.
E
Nash
is
actually
trending
downward,
which
won't
be
the
case,
but
yeah
yeah,
so
we're
just
we're
moving
forward
at
the
speed
that
makes
sense
with
the
pressure,
with
the
knowledge
that,
if
the
pressure
changes
on
the
redis
cache
side,
the
speed
will
have
to
change
so
yeah,
like
I
told
Rachel.
Yesterday
was
you
know,
as
long
as
everything
is
still
fine
we're
going
to
keep
moving
it
roughly
the
speed.
E
But
if
everything
isn't
fine,
we're
going
to
need
a
lot
more
people
and
to
do
it
a
lot
faster,
so
yeah,
that's
where
we're
at
cool
how's
read
this
cluster.
F
I desperately want to have a kind of a project meeting to kind of get on the same page about where the risks are, where the big questions are for sizing the milestones. And I have some very definite opinions about... like, some of them I have a pretty good sense of: the tasks involved and the risks involved, and roughly how long I think they would take under ideal circumstances.
F
And then there are two pieces where I really don't have an idea, and I would love to get... like, I know some of the folks on the team, like Sylvester in particular, are going to have some very definite opinions about at least one of those areas. It's a big question mark for me, and I think that means I need to set up a meeting for, like, you know, midnight-ish my time, because that'll have enough overlap for Sylvester and the folks in Europe.
F
You know, yeah, early in their day. And I think I need that to happen to get us in sync on those points. So that's kind of top of mind for me, and I'm trying to get along as best I can without that, but I feel like that's going to be kind of the key to unlocking a reasonable estimate and de-risking the whole sequence.
E
Setting up a synchronous conversation... your hours complicate things, although I have met with both Sylvester and Marco several times, really, in my morning, so you may be able to find a time that isn't midnight your time but is instead, like, 7 a.m. your time. Yeah, I would prefer midnight, given the two options, but I suspect that you may have a different opinion, so, like...
F
Yeah
yeah
yeah,
no,
it's
true,
I
I've
not
been
sleeping
well
recently,
so
I've
been
up
at
midnight
and
seven
and.