From YouTube: 2020 10 26 Memory Team Weekly
A
Hello, it is October 26 and this is the Memory team weekly meeting. Jumping right into team updates: we've been discussing, in this issue here, setting a goal for getting GitLab down to two gigabytes, or the predefined GCP instance size of 1.8.
A
"In a week" is a bit of a misleading title. This is more about having a tight feedback loop for a week, discussing ideas and hacks and ways that we can quickly get to that footprint, or determining that we can't and maybe documenting that this is the most streamlined footprint we can get.
A
I've gotten some feedback on here from a few folks, so I'd like to get more feedback from the team, make sure we know what week works, start throwing out some ideas, and start creating issues so that we have a structure for this when we actually take the work on. So thanks to those who have read and contributed to it so far, and I welcome more feedback before we figure out which week works best for us.
A
I would prefer the earlier week, because the beginning of a milestone seems like the best time to start it, but we'll get feedback from Jinyoung and Alexei and touch base with Camille again next week. All right, the 13.5 retro is due this Friday. I typically have to nag folks beyond the deadline, so please don't make me nag.
B
Yeah, sorry, it's a bit bare; I just added this. It's taking a bit longer than I thought it would, but I'm still looking into, I guess, observability-related things for image scaling. This came up after we first released it to .com, in discussions when I first added runbooks for it, and then it spilled over into other issues as well, and Andrew and Ben are quite keen on getting observability.
B
Getting it into shape at GitLab at large, that is, and especially if we add new features on top, we should follow the way we do observability.
B
Which is to define SLOs and then define indicators based on metrics that we have, or maybe don't have. So that's what I'm working on: making sure that we have consistent dashboards that this can all feed into.
B
This is also a direct result of a dashboard I started working on that was mostly meant for us to see how the scaler performs, and simple things like what the error rate is, and then you realize you currently can't even measure what the error rate is. So there's some work to do there. That's what I'm currently looking into.
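As a rough sketch of the kind of error-rate indicator being discussed, assuming hypothetical request counters and an illustrative 0.1% budget (not the team's actual numbers or metric names):

```python
# Sketch: computing an error-rate SLI from request counters and checking it
# against an SLO threshold. Counter values and the 0.1% target are invented.

def error_rate(total_requests: int, failed_requests: int) -> float:
    """Fraction of requests that failed; 0.0 when there is no traffic."""
    if total_requests == 0:
        return 0.0
    return failed_requests / total_requests

def meets_slo(total_requests: int, failed_requests: int, slo: float = 0.001) -> bool:
    """True when the observed error rate stays within the SLO budget."""
    return error_rate(total_requests, failed_requests) <= slo

print(error_rate(10_000, 5))   # 0.0005
print(meets_slo(10_000, 5))    # True: 0.05% is under the 0.1% budget
print(meets_slo(10_000, 50))   # False: 0.5% exceeds it
```

A real version would be built on whatever metrics the scaler actually emits, feeding the dashboards mentioned above.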
C
Yeah, I forgot to update there. The only work left at the moment is documentation; there are three different issues about writing documentation. I'm currently updating the documentation for the query recorder, and this week I plan to fully focus on finishing that documentation. There is one open MR that is in review, so when it's merged, I think that from the development point of view we can wrap up.
D
And the associated MRs for them. There are some discussions about how to handle the rollout template and how to handle the ops issue, I mean the ops feature flags, and if you look at the discussion below, there are a few discussion points.
D
But I'm not convinced yet; I need to spend more time on this discussion. I have a general question: did you find the rollout issue useful, or did you find the rollout issue extra work that you don't really want to do when you have the main issue for the work related to the feature?
D
When running at scale, there are usually already so many different details in the issue itself that it's really hard to comprehend exactly what state it is in and how it's being designed. If we add a set of graphs related to the rollout, it's going to be even more of a mess.
D
Yes, this is the example of the rollout issue. This is interesting because it's kind of affecting production, but most of our features are affecting production, and there is another, third idea, which is the change management issue that you create as part of the infrastructure process.
D
But yes, did you create change management issues, or were you not exposed to that? No? I never heard of it. So the idea of the change management issue is this: the clear target of the change management issue is folks running infrastructure, which is SREs and people running the service, so they can approve and understand exactly how our production system is changing.
D
If you look at the feature flag rollout, it would not fall under change management at all, because it's not really a gitlab.com-facing change either. So I guess there are different sides of the rollout: you may want to roll out to your local development team, you may want to roll out to, say, gitlab.com, but you may also want to roll out to on-premise customers, which means enabled by default.
D
Yes, so this is the issue that Xenia created today. From one perspective, it kind of makes sense.
B
I can only say I agree with you. I find it much cleaner and easier for me as well, if I come back to these issues, to separate: has the actual work been done, and how far are we from actually shipping this to everyone?
B
To me those are two totally different things, and there's often a lot of history in GitLab issues that are feature- or product-focused. So having a separate issue with a simple step-by-step list that you can check off, where I can just look at it and see at a glance how far we are from fully rolling this out, I find super useful.
D
Really, you should close the issue as soon as possible and create follow-up issues to roll it out. So how do you find the information about whether a feature is rolled out or not, whether it's enabled by default or not?
E
Product managers gave up on that. We didn't know, so the answer is we delegated it to engineering managers. The engineering managers merge the release posts.
D
Because I think this is really the main aspect that I wanted to solve with this whole process: create an anchor point that says to you, once this is closed, it means it's fully rolled out; it means it's ready to be released to the public. Our current flow is that you close the main issue as soon as the work is more or less validated, but that creates this kind of chaos between something that is done and something that is not.
D
So I was thinking that maybe one of the ways to approach that is to extend the release post template with the name of the feature flag behind which the given feature is gated, and you as a PM just go to a page that shows all our current feature flags and their state, and that is your information about whether something is enabled by default or not.
D
This is your gate, and this is your only place for the information. Generally, the rollout issue tracks how far we are into the process of making something on by default. And there is another confusion: change management is meant for gitlab.com, but the rollout issue is really meant for the default-on rollout.
E
Yeah, it's really confusing for, I think, everyone involved. It's not clear; even the engineering manager, frankly, has to go try and track it down, in my experience. It's less clear to the product manager, and then it's even less clear as you move away from engineering. Once it gets to the TAMs and the SAs, they see the issues closed, the MRs landed, but they have no idea if they can tell a customer, hey, this thing shipped.
D
So from your perspective, would it make sense to extend the release post template with a link to the feature flag, or the feature flag gate that the feature is behind, kind of making you as a PM look at the feature flag state to know whether something is on by default? Because from your perspective this is really the relevant information: if there is a feature flag, whether it's on by default, or if there is no feature flag, it means that it was removed, which means it's on by default in that case.
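The decision rule just described can be sketched like this; the flag registry and flag names below are made up for illustration, not GitLab's real flag data:

```python
# Sketch of the PM-facing rule discussed above: if a feature flag still
# exists, its state tells you whether the feature is on by default; if the
# flag is gone, it was removed after rollout, so the feature is on.

FLAG_REGISTRY = {
    "image_resizing": {"default_enabled": False},
    "cached_sql_queries": {"default_enabled": True},
}

def on_by_default(flag_name: str, registry: dict = FLAG_REGISTRY) -> bool:
    flag = registry.get(flag_name)
    if flag is None:
        # No flag anymore: it was cleaned up, which means fully rolled out.
        return True
    return flag["default_enabled"]

print(on_by_default("image_resizing"))      # False: still gated
print(on_by_default("cached_sql_queries"))  # True: defaults on
print(on_by_default("old_removed_flag"))    # True: flag removed, feature shipped
```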
E
Yeah, it might not make it into the release post itself, but it could be in the release post template; that would make it very clear and transparent. But also, just having, maybe in the MR somewhere, "this is the feature flag name that's going to be used for controlling the rollout", then we all just know where to go check. We have generally avoided trying to expose the whole feature flag process to customers, because we haven't built any UI for it in GitLab self-managed.
E
As far as I understand, managing the feature flag states has always kind of been an internal implementation detail, and on rare occasions we've exposed it, like for extra experimental features in the early days of Gitaly Cluster and things like that. But today we haven't exposed it to customers. I think it'd be great to have the feature flag name put down somewhere on the MR, and then we could put it in the release post template. We might not expose it.
E
I don't think so, but that would be a nice source of truth to easily understand where something is when you're determining whether a release post item should get merged or not. It just might not get presented publicly, probably not.
D
Yes, that's exactly the idea. Right now we have the template that gives you a structured way to know what to expect with the rollout of the given feature flag, and this is the anchor point. By design it should be the anchor point for the PM to understand how far we are in the rollout of the given feature, because this rollout issue is directly connected with the MR that introduces the feature flag.
D
If you look at the steps below, one of the steps is removal of the feature flag, and technically the intent of this rollout issue is that it's something to be scheduled by the engineering manager as well, because you want to remove the feature flag as soon as possible, as soon as you confirm that everything is working, because it's technical debt. So the rollout issue is really about rolling out the feature and removing that feature flag from the code base.
D
So having only a single path (and this is exactly the last two steps of this template), you want to really clean up after yourself, to not have this matrix of combinations that we have today. This tries to describe the whole process of testing the feature flag, maybe not in the best way, as it's structured today.
D
The intent of this issue is that it looks at the whole process of rolling out the feature flag, but there is also infrastructure, interested in the change management issue, which is more specific to enabling some very specific bit on gitlab.com in a specific time period, and that slightly creates confusion about how to handle this whole rollout of the feature flag. And the proposal says: let's remove this rollout issue completely.
D
Let's move that into the original issue and keep the rollout as part of the original issue, which is also sketchy, because would it mean that this issue stays open as long as we don't enable this feature flag by default? This is not usually the way we work. We usually tend to close the issue as soon as we finish the milestone, and we create follow-up issues with the next steps, which could be "enable by default", and this is exactly the rollout issue.
A
Now, I agree with you; I think the rollout template makes sense to me. It serves a specific purpose for what's happening with the feature flag, whereas the issue is separate: it's the conversation for how and what is being created. So I will read the issue and throw in my two cents later on, but thanks for bringing it up.
E
I agree. Right now there's no source of truth for what's going on with a feature flag; usually it's just spread across a bunch of infra issues or something like that, and it's really hard to understand what's going on. So having a single place to look, to understand what the process is, what our goals are, and what the success metrics for continuing on a rollout are, would be really interesting.
D
There is also another challenge: developers, or engineers, don't really spend a lot of time on describing the success criteria. In general, if you are doing a rollout, you should know exactly what metrics you are looking at; you should know exactly the outcome. Even if you look at the associated rollout issue for the Canary Ingress, this one, and scroll up, there is a section which says: what are we expecting to happen?
D
What may happen if this goes wrong? What can we monitor to detect problems with that? And it's blank. That's the whole purpose of the rollout issue: you document, not for yourself, because you already know it; you document for anyone else that may be interested in that information. So this rollout issue is not for you; this rollout issue is for everyone else to understand.
D
If you are not around: how to deal with this feature, how to roll back, what patterns to look at if something is going off, whether it is this thing that is causing things to be off. So this is also the problem with the rollout issue: we don't really take the rollout issue seriously enough to spend enough time filling in the details and providing this comprehensive information for other people. And it could also be interesting to the support team later.
D
If they see this rollout issue, they could look at these graphs and understand: I saw that pattern as part of this rollout; maybe it's this flag, enabled by default, that is causing these behavior patterns.
D
Yes, and there is another issue in the improvement epic which says: create a bot to validate rollout issue completeness. Part of completeness could also be linking and cross-linking all the relevant pieces of information, because my perception is that right now you go to the docs.gitlab.com page where you see all the feature flags, what their current state is, and maybe what changed between, say, 13.4 and 13.5.
D
What to look for: from the MR you can usually very easily go to the original issue, but if you go to the original issue, it's really hard to find information about the rollout and how this feature is being rolled out. So, at least from my perspective, with linking the MR and the rollout issue, the intent was really the following.
D
Two types of people, sorry, three types of people: PM, SRE, and support. They can really go to the docs, quickly find the relevant feature flag, find exactly which MR introduced it if they need to check, and find the rollout issue. If the rollout issue has all boxes checked, it means that this feature is complete: it's fully rolled out, it's done. Of course.
D
Maybe we could still clean up the epic or the original issue for completeness, but this kind of information can be taken from the MR context. The milestone can be taken from the context of the MR to which the given feature flag was assigned; the group we already put in, but it could come from the git log.
D
So that's a little of the backstory I was thinking about behind the MR, the rollout issue, and the group, and how these different pieces of information play together, and how you can give, say, Josh or Grant this very specific information: who to reach to understand the given feature flag, how it's being rolled out, what metrics you are looking at, and things like that, plus how it changed between different releases, as well as how it runs on gitlab.com.
C
So you can follow the rollout process, which would be really difficult to follow in the main issue, because it contains a lot of different things. I look at the rollout issue as a to-do list that you can go and check and place the metrics in, and you can always easily check the current status, or you can find it in the comments.
C
Disable it and follow up on the status and everything else. So it's much clearer to me, especially for some complex issues like atomic processing; maybe for some small issues they could be integrated into one, but for the more complex issues that have a lot of conversation, my personal opinion is it's better to have one place to follow.
D
By definition you're going to close this issue fairly quickly as well, so the cost of creating the issue is fully related to the complexity of the feature flag.
D
So we already imposed a very strict, sorry, not very strict, a different behavior for rolling out the feature, because of how our system is built. And the information that you are providing here, you're not providing it for yourself, because you already know all the details; you're providing it for everyone else that may be affected by it.
D
So I need to write a few words. I just need to figure out exactly which one.
A
Okay, jumping back to the agenda here; we got through everything. Camille copied his update on, wow, sorry, investigating the memory leak for the Sidekiq memory killer and looking into zlib, so we can read up on that later.
A
Unless there is anything else to talk about, I want to go back to a couple of things. On the synchronous week to talk about the GitLab two-gigabyte footprint: it does depend on us wrapping up 13.6. I want to make sure that we finish all the things in flight, the documentation, including the blog post for image resizing and the documentation for cached queries.
A
Those are important to get out there and finish up so they don't continue to linger, along with the underlying work: image resizing, making sure that wraps up, and the implementation for cached SQL queries, and anything else, well, the majority of what's in flight. So is anybody concerned about what's currently in the 13.6 timeline and the ability to close it out within the milestone?
B
I'm not totally sure; there hasn't really been any movement on the caching-related stuff. One thing that would be useful to have was an updated figure for cache hit rates in production based on CDN logs. That was something Igor had helped us out with in the past, so I created an issue for this in the infra issue tracker, but there hasn't been any movement on that, so we're still waiting.
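As a rough illustration of the kind of figure being asked for, a cache hit rate can be derived from CDN log entries along these lines; the log records and their format here are invented sample data, not the real CDN log schema:

```python
# Sketch: deriving a cache hit rate from CDN log records. A real version
# would parse the actual CDN log format instead of these invented dicts.

sample_logs = [
    {"path": "/uploads/avatar1.png", "cache_status": "HIT"},
    {"path": "/uploads/avatar2.png", "cache_status": "HIT"},
    {"path": "/uploads/avatar3.png", "cache_status": "MISS"},
    {"path": "/uploads/avatar1.png", "cache_status": "HIT"},
]

def cache_hit_rate(logs: list) -> float:
    """Fraction of requests served from cache; 0.0 for an empty log."""
    if not logs:
        return 0.0
    hits = sum(1 for entry in logs if entry["cache_status"] == "HIT")
    return hits / len(logs)

print(cache_hit_rate(sample_logs))  # 0.75
```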
B
That's kind of why I switched back to the observability stuff, because the main problem with this issue is that it's not really clear what the goal is. We don't really know what we need to do, how bad the problem is if we would go ahead without any kind of caching solution. And it ties in as well with what Alexei started working on before he left for PTO, which was to verify performance on self-managed; that has stalled as well.
B
So I don't know; maybe we can try to pick these up in parallel, so that we can make progress on this during the next week. Wait, did you say until the end of 13.6? So that is, that's still.
B
A bit of a question mark right now, even what the goal is here, because only that will inform what we have to do. There are a couple of smaller things we could just do, but if we just do them, it goes a little bit against the idea that we verify any performance improvements.
E
As far as the goals, I think we just want to try to exit 13.6 with an understanding of how this behaves on self-managed, so we can either have a release post which says, hey.
E
You know, beta for self-managed, or just fully GA and on by default for self-managed. Because we don't want to release this on by default and then end up leading to degraded instances on some of the more heavily utilized installations because of a 100 percent resizing rate. That's my main concern. So, to double-click one level deeper.
E
The question is really, for self-managed without the use of a CDN, like we have on .com right now: what is the difference in CPU consumption with it on and off?
E
And I think if those are within five percent of each other, or ten percent of each other, in absolute terms, it's probably fine. If we're talking about something more substantial, then we probably need to provide guidance but still release it. I think we're going to try and provide guidance to people as far as what to expect.
B
No, that sounds totally sensible, I think. The last time I looked at that issue about verifying performance for self-managed, it sounded like Alexei was working with Grant on this, but I'm not entirely sure of that, actually, looking at it now.
E
And so you kind of need a browser to go do that, or you have to go in and tell k6 to specifically download certain URLs as asset downloads, but you have to tell it exactly what those URLs are. So that might be fine; I'm not sure what the path is, but I think that's the problem to solve: doing more of an actual browser-based load test versus fire-and-forget HTTP requests.
A
So Josh, if we're unable to validate the impact on self-managed within the next couple of weeks, say, giving us a week of lead time before the actual release date, what's the plan then? Do we ship it as default off, or do we ship it as on and give them documentation saying, hey, if you see a CPU increase from this, then just go ahead and turn it off using this ops flag?
E
Yeah, I think if we're still stuck on figuring out how to understand what the impact is more broadly, it's probably worth adding a configuration flag for this, because I don't think feature flags are usable enough to really expect our users to deal with them, is my sense. And so it might be.
B
Right, no, I was thinking, because we have an open issue for something that I didn't know we had before, actually. What do we call them, Camille, operational feature flags or something? They're feature flags, but they're basically just these perpetual feature flags that we never lift, and they're a simple way of turning things on and off that sits more at the infrastructure level.
B
Which we haven't done before, and I think the idea, not just for self-managed but also for .com, is that it's always good if you have a safety switch. So there was a start to convert these more rollout-oriented feature flags that we have into one of these operational infrastructure flags that would just stay there forever.
E
Yeah, I mean, we could expose it as a UI settings option as opposed to config, and then it's easier to just flip back and forth. I'm not sure if that would work; I'm not sure whether we can have a runtime, application-based setting which then changes whether this works or not.
B
You know, this reminds me, we kind of already have a way to turn off image scaling, at least on the way out from Workhorse, because we have this piece of config. Sorry, only one MR is still open: there is an open MR that I'm still waiting for feedback on which changes the Helm charts to add this config, which is just a number that we set as a cap for the maximum number of scaler processes.
B
Well, actually, more correctly said, it's a cap for the number of parallel requests that are allowed to be in the system simultaneously for image scaling. It's again another safety switch, and I actually tested this: it's totally fine to set it to zero, and it will just reject any image scaling request immediately and fail over to serving the original image.
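The behavior described above, a cap on in-flight scaling requests where exceeding it (or a cap of zero) falls back to the original image, could be sketched as follows. Workhorse itself is written in Go; this Python version is only an illustration of the semantics, with invented names:

```python
# Illustrative sketch of a cap on the number of image-scaling requests in
# flight at once. When the cap is reached, or set to zero, a request is
# rejected immediately and the caller serves the original, unscaled image.

class ScalerGate:
    def __init__(self, max_in_flight: int):
        self.max_in_flight = max_in_flight
        self.in_flight = 0

    def try_acquire(self) -> bool:
        """Admit a scaling request only if we are under the cap."""
        if self.in_flight >= self.max_in_flight:
            return False
        self.in_flight += 1
        return True

    def release(self) -> None:
        self.in_flight -= 1

def serve_image(gate: ScalerGate, path: str) -> str:
    if not gate.try_acquire():
        return f"original:{path}"   # fail over to the unscaled image
    try:
        return f"scaled:{path}"     # stand-in for the real rescaling work
    finally:
        gate.release()

gate = ScalerGate(max_in_flight=2)
print(serve_image(gate, "avatar.png"))  # scaled:avatar.png

off = ScalerGate(max_in_flight=0)       # the "safety switch" setting
print(serve_image(off, "avatar.png"))   # original:avatar.png
```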
B
It's not as clean as being able to turn it off via feature flag, though, because this will still instrument the code path of the scaler, and it has different caching behavior, for instance; that's something we found during the caching epic. But it is kind of a safety switch, so we could also just document that and tell users about it as a fallback, if we don't want to add another toggle.
E
Yeah, we're trying to avoid configuration if at all possible, but if for some reason it takes us a long time to figure out what the safety story is, we can also just, it's not great, but we'd essentially be putting the risk on our customers: yeah, you can try it, and then you can turn it off if it blows up your instance, which, you know, they would be cursing us for.
E
My guess is it's probably not going to blow things up, because of browser caching, but I don't know. So I feel like, if we're really concerned, we should just not even tell people about it until we have more confidence in it.
B
I think I'm with you there, from just a gut-feel perspective, that I'm not too concerned about it because of browser caching, because we do write a max-age Cache-Control header; it's still at 300 seconds, so it will be cached for, what is that, five minutes.
B
If you have a well-behaved HTTP client that actually honors it, which browsers should be, right.
B
Wait, that's exactly it, we do that, yeah! That's part of, I guess I didn't mention this explicitly earlier, but that's basically those two MRs that I took over from Alexei before he left on PTO. The one for Omnibus is already merged; the MR I started for Workhorse is in maintainer review, which defaults it to exactly this; and in Helm that's an open MR.
B
It's a bit trickier there, because the notion of CPU cores doesn't really make sense in Kubernetes, where you work with resource limits, so we have to approach it a little differently, but we also set it to a low value in Helm. So that is something we do anyway; sure, that's just the safety toggle, right.
B
So it protects us as well: if for some reason image scaling should be super slow on a customer installation, I can't even come up with a good reason, maybe because the system is otherwise under load, then if too many of these scaler requests were in flight and not finished quickly enough, and a new one came in that would exceed the threshold, it will be rejected. We've had that logic from day one.
A
Okay, looks like Camille's got to drop off; I need to drop in a few minutes too. But I listed our link to the ops flag issue; sounds like we've got some safety in there, and it seems like the ops flag is the way to go right now until we can actually confirm.
D
So the ops flag is really meant to be a failsafe, like the last resort failsafe. If something goes terribly wrong, you can dynamically disable the given feature and you have instant relief. Development flags you should remove or migrate to ops, but ops kind of indicates that we don't intend to remove this flag in the near future.
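A toy sketch of the failsafe semantics being described: a long-lived ops flag that defaults to on, checked on every request, and flippable at runtime for instant relief. The flag store and names are invented for illustration, not GitLab's actual `Feature` implementation:

```python
# Toy sketch of an ops feature flag as a kill switch: checked on each
# request, default-on, and flippable at runtime without redeploying.

class OpsFlags:
    def __init__(self):
        self._disabled = set()

    def enabled(self, name: str) -> bool:
        return name not in self._disabled

    def disable(self, name: str) -> None:
        """The 'instant relief' path: takes effect on the next check."""
        self._disabled.add(name)

def handle_request(flags: OpsFlags) -> str:
    if flags.enabled("image_resizing_ops"):
        return "resized"
    return "original"   # feature dynamically switched off

flags = OpsFlags()
print(handle_request(flags))   # resized
flags.disable("image_resizing_ops")
print(handle_request(flags))   # original
```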
D
So from our perspective it's a calculated technical debt that we are taking, basically for the performance aspect that we may not be fully certain of. But we also don't want to say that this is regular development, because it's not really regular development; it's rather a failsafe mechanism that allows you to dynamically reconfigure the system. Otherwise, a change to the Workhorse config actually requires a config change and upgrading all Workhorse nodes or Kubernetes nodes, basically your pods.
D
That takes a lot of time, while toggling a feature flag is basically an almost instant thing to do. So ops is really meant for long-living feature flags that may provide relief if we find something being severely unperformant, and it allows us to react almost instantaneously. But from our perspective it's a calculated risk that we keep these flags, for these reasons.
D
Yeah, no, I'm not saying either that they should replace it, but this is the fundamental thinking behind ops: if there's something that we want to clean up from our backlog of feature flags, we should migrate them to ops and acknowledge that this is something we intend to keep long term, really without any expiration date.
A
All right, I was going to talk a little more about the retros, but since we're running out of time I will just add my thoughts asynchronously. So thanks, everybody, for the feedback. Good meeting, happy Monday.