From YouTube: 2020-09-15: Gitaly cgroups catchup
A
All right, so we're talking about cgroups. So there were a few things: we wanted to review what the new, improved prerequisites are before we can go to production, and touch base on how we're both feeling about the go-to-production approach.
A
Why don't we start with the overall: how do you feel about the approach?
B
Yeah, so, to make sure I understand the approach. Basically, the plan is, instead of rolling it out on Canary and then on a shard basis, we're going to configure the cgroups up front, just create all the cgroups, and then we're going to use the feature flag as a percentage rollout. For example, one percent, ten percent, throughout the whole fleet, right?
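The percentage-of-time rollout being described can be sketched as a per-call dice roll. This is an illustrative stand-in, not Gitaly's actual feature-flag code; the function name and shape are made up.

```python
import random

def cgroups_enabled(percentage: float) -> bool:
    # Hypothetical gate: each spawned git command independently rolls the
    # dice, so roughly `percentage` percent of commands land in a cgroup.
    return random.random() * 100 < percentage

# At 0 percent nothing is gated; at 100 percent everything is.
assert not any(cgroups_enabled(0) for _ in range(1000))
assert all(cgroups_enabled(100) for _ in range(1000))
```

Because the roll happens per call rather than per host, every node in the fleet sees a proportional slice of traffic at once.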
A
Yeah, I felt like that was probably the safest one. So go ahead and talk about what your feelings are. Yeah.
B
At first I was a little bit against it, simply because I feel like Canary and, for example, HDD and Marquee are going to have different workloads, so they might have different cgroup limits that we might want to put in.
A
Yes, and let me address that right now: I am totally planning on that. I don't need to have it in production to do the initial calibration, and I'm expecting Canary to have different thresholds, different sizing limits, because of its workload. And I'm planning on doing the same analysis for all of those categories that we talked about before.
A
That'll be baked into the Chef MR. So the feature flag would become an on/off switch, but HDD, Marquee, and the main fleet and Canary would all potentially have different thresholds, depending on... okay.
A
Yes, yes, okay. Yes, assuming that the data, the historical usage trend, suggests that. So that was one piece I thought maybe wasn't clear, that I wanted to highlight.
B
Okay, I'm glad we're in agreement there. And now, I guess it makes sense to do it all in one go, because then we get more data and we don't mess around with enabling it on one fleet.
A
Yeah, I was really surprised at the feature flag support in Gitaly; it's just not there. It doesn't support actors. I was doing some sanity checking and I realized that it doesn't work anything close to the same way.
A
Yes, same here, okay, same here. And it does, at least as I'm thinking of it, still give us a rapid way to turn it off. I think I was chatting with Rachel a few days ago, if memory serves, about the consequences of turning it back off. So if we're in a pinch... and I wanted to talk through this with you as well, so stay with me on this.
A
So pretend that we've enabled cgroups, and it's been enabled for, I'm just going to make some stuff up, an hour, and we start to get some out-of-memory errors on one machine, say file-75. Rather than try to isolate the problem on that machine, I think we just cut the feature flag off globally, and I think...
A
The effect of that, because Gitaly's implementation is effectively to not call the AddCommand method on the cgroup manager if that feature flag is off, is that the time horizon for the feature flag taking effect is probably measured in seconds: however long it takes for Gitaly to refresh its cache for the state of the feature flag. Yeah.
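The "however long it takes to refresh its cache" behavior can be modeled as a TTL-cached flag check. This is a sketch under assumptions, not Gitaly's implementation; the class name, TTL value, and injectable clock are all made up for illustration.

```python
import time

class CachedFlag:
    """A flag checker that refreshes its cached value at most every `ttl`
    seconds, so disabling the flag upstream takes effect within roughly
    one TTL rather than instantly."""
    def __init__(self, fetch, ttl=10.0, clock=time.monotonic):
        self.fetch, self.ttl, self.clock = fetch, ttl, clock
        self.value, self.expires = fetch(), clock() + ttl

    def enabled(self):
        now = self.clock()
        if now >= self.expires:
            self.value, self.expires = self.fetch(), now + self.ttl
        return self.value

# Simulate time with a fake clock so the behavior is deterministic.
state = {"t": 0.0, "v": True}
flag = CachedFlag(lambda: state["v"], ttl=10.0, clock=lambda: state["t"])
assert flag.enabled()            # flag is on
state["v"] = False               # operator flips it off upstream
assert flag.enabled()            # still cached on for up to one TTL
state["t"] = 11.0
assert not flag.enabled()        # refreshed after the TTL: now off
```

The ripcord property discussed here is exactly that last assertion: turning the flag off propagates within seconds, bounded by the cache TTL.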
A
And with regard to the filesystem cache pages: they will remain charged to the cgroups, but because of the nature of the cache being kind of a shared entity, host-level pressure will still be perfectly free to evict those pages. So it's not like they're pinned in memory. Even that aspect of it, while a little bit harder to reason about, doesn't require any kind of tuning to make it work the way you expect.
A
So, just to kind of play out an example: if, after we've turned the feature flag back off, there's a really greedy process that wants to allocate, you know, 20 gigabytes of anonymous memory, and now it's running in the old systemd-managed cgroup along with Gitaly, at that point the kernel is going to say: well, I don't have 20 gigs of free memory lying around; I've got to go evict cache.
A
It'll still be able to free up memory by kicking out pages that were charged to the cgroups, just in the same way that today it does for the old 90-gigabyte cgroup. So that behavior doesn't change; it remains perfectly viable. And that was the only other gotcha I needed to talk through, to kind of reason my way into: yes, this would be okay, too.
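The reclaim behavior being reasoned through here can be captured in a toy model: file-backed pages charged to a cgroup stay evictable under host-level pressure. This is a simplification with made-up numbers (real kernel reclaim is far more nuanced), intended only to show the accounting.

```python
def reclaimable(cgroup):
    # Only file-backed pages can be dropped and re-read from disk;
    # anonymous pages would need swap to reclaim (ignored in this toy model).
    return cgroup["file_backed"]

def free_memory(host_free, needed, cgroups):
    """Can the host satisfy an allocation of `needed` bytes by evicting
    page-cache pages, regardless of which cgroup they are charged to?"""
    evicted = 0
    for cg in cgroups:
        if host_free + evicted >= needed:
            break
        take = min(reclaimable(cg), needed - host_free - evicted)
        cg["file_backed"] -= take
        evicted += take
    return host_free + evicted >= needed

GiB = 2**30
cg = {"file_backed": 85 * GiB, "anonymous": 5 * GiB}   # the old 90 GiB cgroup
# A greedy 20 GiB allocation succeeds even with only 4 GiB truly free:
assert free_memory(host_free=4 * GiB, needed=20 * GiB, cgroups=[cg])
assert cg["file_backed"] == 69 * GiB    # 16 GiB of cache was evicted
```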
B
Got it, yep. I think that makes sense and I agree with that. I kind of feel like, oh, we should have gone with our original approach, just enabling it on Canary and going a hundred percent feature-flag-wise, but I do feel like having a ripcord with feature flags is a lot safer in that sense, so I agree with the rollout.
A
I think that's perfectly fine. So there were kind of two aspects of that I wanted to chat about, just briefly. Starting with one percent is really just kind of, you know, almost a formality. I don't expect to leave it in the one-percent state for very long, and then we move it up to a more substantial percentage where we actually get to see some action.
A
I wanted to talk about two things. One is expectations for what the results are likely to look like, and two: I was a little bit tempted to split it into separate change requests, but there's so much paperwork, I figured we'd just pile it onto one change request.
B
Yeah, yeah, we can always move it to in-progress and then back, so that's...
A
Great, that sounds great, okay. So, what to expect: with a percentage-of-time rollout, we'd expect the growth of the memory usage of the cgroups to be slower, proportional to what percentage we've set.
A
Yeah, got it. It's really like: anonymous pages are, like, when a process mallocs a page, for example; it's kind of historical nomenclature. So there are file-backed pages, which are, as the name suggests, backed by a file (it's a block from a file that's loaded into the page cache, which we now call the filesystem cache), and then anonymous pages are everything else, everything that's not file-backed. But yeah.
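The anonymous versus file-backed split just described is visible in Linux's `/proc/<pid>/status` as the `RssAnon` and `RssFile` fields (both in kB). A small sketch that parses that format from sample text, so it runs anywhere; the sample numbers are made up.

```python
def rss_breakdown(status_text: str) -> dict:
    """Split a process's resident set into anonymous vs file-backed pages,
    as reported (in kB) by Linux /proc/<pid>/status."""
    fields = {}
    for line in status_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("RssAnon", "RssFile", "VmRSS"):
            fields[key] = int(rest.split()[0])   # value is in kB
    return fields

# Made-up sample (a real status file also has RssShmem, omitted here):
sample = "VmRSS:\t  81320 kB\nRssAnon:\t  12040 kB\nRssFile:\t  69280 kB\n"
info = rss_breakdown(sample)
assert info["RssAnon"] + info["RssFile"] == info["VmRSS"]
```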
A
If your program does a read syscall, that'll read the file into the filesystem cache from disk, if it wasn't already there, and then the program is just accessing pages in the filesystem cache. Very similarly, if a program mmaps a file, that's effectively saying: within the context of this process's virtual address space, whenever I access this virtual address...
A
...it's going to correspond to this page of the file, and that virtual address corresponds to that page in the file. And just like normal file-backed pages, the kernel is free to kick those out of the filesystem cache whenever it wants. It's just that whenever the process goes to access a virtual address that corresponds to a page in the file, the kernel will look at it.
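The mmap behavior described here can be demonstrated directly in a few lines: the mapped pages live in the filesystem cache, and the kernel is free to drop and re-fault them without the program noticing. A minimal, self-contained sketch using a throwaway temp file:

```python
import mmap, os, tempfile

# Write a small file, map it read-only, and read through the mapping.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello page cache")
    path = f.name

with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    head = bytes(m[:5])   # plain indexing; a page fault loads the page if needed

os.unlink(path)
assert head == b"hello"
```

From the program's point of view this is ordinary memory access; the page-fault round trip to disk, if the page was evicted, is invisible.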
A
That'll trigger a page fault, and in kind of the normal control flow, the kernel will load that page back in from disk, if it wasn't already there, and then let the application resume just as though nothing had happened. So it works. These are just kind of different facets of how the kernel effectively manages the filesystem cache. Yeah. So, with regard to cgroups...
A
I think you know this, but I just want to recap, because it's relevant to this part of the chat: any page in the page cache, whether it's anonymous or file-backed (though mostly we're talking about the filesystem cache here), is going to be charged to at most one cgroup.
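The "charged to at most one cgroup" rule works out to first-toucher accounting: the first cgroup to fault a page in pays for it, and later readers of the same cached page are not charged. A toy model (the page and cgroup names are made up):

```python
class PageCache:
    """Toy model of cgroup page-cache accounting: the first cgroup to touch
    a page is charged for it; later readers get it for free."""
    def __init__(self):
        self.charge = {}                 # page -> cgroup it is charged to

    def touch(self, page, cgroup):
        self.charge.setdefault(page, cgroup)
        return self.charge[page]

cache = PageCache()
assert cache.touch("repo.pack:0", "/system.slice") == "/system.slice"
# A per-repo cgroup reading the same, already-warm page is not charged:
assert cache.touch("repo.pack:0", "/gitaly/repo-42") == "/system.slice"
# A cold page touched first by the per-repo cgroup is charged to it:
assert cache.touch("repo.pack:1", "/gitaly/repo-42") == "/gitaly/repo-42"
```

This is also why, as discussed below, the new per-repo cgroups will accumulate cache slowly at first: most hot pages are already charged elsewhere.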
A
If we're not using cgroups at all, then the root cgroup, slash, will accrue them all. Today, right now in production, as you know very well, we've got the systemd-managed cgroup; it's sized at 90 gigabytes, and almost all of the pages in the filesystem cache get charged to that cgroup. And it's actually the fact that that cgroup is noticeably smaller than the host-level memory capacity...
A
...that explains why those hosts tend to be in the unusual state of having a relatively large number of gigabytes of free memory: almost everything that we're doing that touches that large filesystem is running inside that cgroup, which has a 90-gigabyte limit, and...
A
...any files that they access get charged to that cgroup, because nobody else is actually accessing those pages. And we can load that up: for example, if we cat one of these large files outside of that cgroup, then that'll go into the root cgroup. I think you get what I'm saying. Yeah, yeah.
A
So, circling back to what things will look like once we enable these cgroups: each of the, let's just say 1000 for now, each of the 1000 per-repo cgroups is going to independently accumulate filesystem cache pages.
A
That's going to happen unnaturally slowly for probably the next few weeks, because on many of these Gitaly hosts, most of the filesystem cache pages are already charged to a different cgroup, the original cgroup, and so our git commands that are now going to be running inside their own per-repo cgroups won't be charged for them.
A
So you've already got all of the pieces, so this will be quick; with anybody else it would take longer. So, you know, these per-repo cgroups are going to be short-lived.
A
On the order of, I don't know, hours to days, because we do deploys very often. So anytime we do a Gitaly restart, the old Gitaly process's per-repo cgroups get destroyed, and whatever filesystem cache pages had been charged to those cgroups, when those old per-repo cgroups get deleted, their...
A
We're not... I don't know, that's a good question. I mean, generally we don't really pay a lot of attention to monitoring the filesystem cache state, and that's really kind of what it's mostly going to be about: just being the dumping ground for the filesystem cache, but...
A
...lie to us, yeah, exactly. And you know what else, I just realized this: so today, when I say the systemd-managed cgroup, I mean the one that's named after the gitlab-runsvdir unit.
A
Our /gitaly cgroup has no limit, so that's actually going to let the filesystem cache use basically whatever it wants; it would act like a normal host does. Yeah.
A
...able to use as much of the memory as it wants to as filesystem cache. I kind of like that as a side effect; I mean, that wasn't in anyone's mind as we were planning the hierarchy. Actually, I kind of thought it was a little bit silly to have a /gitaly layer of the hierarchy that we didn't...
A
...you know, I mean, in the sense that it does no harm, and it's kind of helpful for namespacing purposes, but we don't technically need it. So it doesn't serve a purpose in that sense, because... yeah.
A
Anyway, I'll stop bothering about that. So...
A
The main thing I wanted to kind of talk through was what to expect in terms of memory usage that gets charged to the cgroups, because most of the filesystem cache has already been warmed, and those pages remain available, but they're charged to different cgroups. So what we'll see in terms of the filesystem cache being charged to the new cgroups is really just the stuff that's not already warmed, and that'll take some time to accrue, I'm guessing. Yeah, yeah.
A
So that's really it, I think. So I guess, in terms of what we'll see during the first hours or days of the rollout, I think the growth of the memory.usage_in_bytes metric is going to be slower than what we would see if the filesystem cache wasn't already warmed for them.
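The metric in question, cgroup v1's `memory.usage_in_bytes`, is just a single decimal integer read from the cgroup filesystem. A tiny sketch of parsing it; the example path in the comment is hypothetical and the sample value is made up.

```python
def usage_in_bytes(raw: str) -> int:
    """Parse cgroup v1's memory.usage_in_bytes: a single decimal integer.
    On a live host you would read something like
    /sys/fs/cgroup/memory/gitaly/repo-42/memory.usage_in_bytes
    (path hypothetical), which includes both anonymous memory and the
    filesystem-cache pages charged to the cgroup."""
    return int(raw.strip())

assert usage_in_bytes("734003200\n") == 700 * 2**20   # i.e. 700 MiB
```

Because warm pages are charged elsewhere, this number should climb slowly at first for the new per-repo cgroups, which is the expectation being set here.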
B
That makes sense; I wasn't aware of that, so thank you for the callout. Okay, so what do you think: for example, on day one, let's say tomorrow or Monday morning my time, we roll out one percent, then ten percent, with maybe an hour in between to make sure everything works fine, and then on Tuesday...
B
...around the same time, after 24 hours, we go to 50 percent, and then we leave it on for maybe two or three days and enable it a hundred percent on Thursday, so we can actually get some time between deploys and things like that, to make sure everything behaves correctly. Sure.
A
That sounds reasonable to me. Let's kick it off Monday.
A
You may know this already, but I'm going to be out of office; I think it's next week. Oh, okay, yeah. I was kind of hoping to be around for some of this, but I guess... oh.
A
Yeah, no, I'm really satisfied to get this rolled out, and I'm planning on doing that. So, while I've got you here: I'm not going to spend a lot of time on this, but I did want to just show it, since I've got it on screen. It's very...
A
So there were two things I wanted to do for kind of calibrating. The CPU usage, I don't care about that; we're using CPU shares. I figured we'll set a generous number of shares for each cgroup; it implicitly supports burst behavior anyway, even if we set...
A
Question, right; so exactly, exactly. That's why I'm focusing just on the memory. Yeah, perfect. So, for the memory: this is the last seven days. I'm going to screenshot this, so you don't have to memorize it or anything, but I just want to talk very briefly. I'm looking back at the last seven days for all Gitaly nodes in production that spawned a command through the mechanism that we're going to be using cgroups on, so each of those commands gets...
A
...rusage statistics captured. And when I say each: almost all of them get rusage statistics. I'm assuming that this is a reasonably large sample; it's not a complete census, but it's close enough for our purposes. All I really want to do is make sure that there's no single command that's going to blow the budget, no one command that's going to need more memory than we're going to give to the per-repo cgroups. Then we'd have to reevaluate.
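The budget check being described, percentiles plus max over the sampled per-command RSS values, can be sketched in a few lines. The sample values and the budget are made up; nearest-rank percentiles are good enough for this kind of sanity check.

```python
def percentile(sorted_vals, p):
    # Nearest-rank percentile: index = ceil(p * n / 100) - 1.
    idx = max(0, -(-p * len(sorted_vals) // 100) - 1)
    return sorted_vals[idx]

def over_budget(max_rss_samples, budget):
    """Summarize a sample of per-command peak RSS values and list any
    that would not fit in the planned per-repo cgroup budget."""
    vals = sorted(max_rss_samples)
    stats = {p: percentile(vals, p) for p in (50, 95, 99)}
    stats["max"] = vals[-1]
    return stats, [v for v in vals if v > budget]

samples = [64, 70, 80, 90, 120, 4000]        # MiB, made-up numbers
stats, outliers = over_budget(samples, budget=1024)
assert stats["max"] == 4000 and outliers == [4000]
```

A single outlier above budget, as in this made-up sample, is exactly the "then we have to reevaluate" case.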
A
Happily, this is giving us the number of commands witnessed and the 50th, 95th, and 99th percentiles, and the actual max. Is that in bytes? Yes, it's in bytes. Okay, so this is... and I've got it broken down by shard. But we can already see that... I will double-check that it's in bytes, because RSS is often reported in kilobytes. But this has to be bytes, because we don't... it's got to be bytes, but I'll double-check. Is it really bytes?
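The units question is worth pinning down, because `getrusage(2)` reports `ru_maxrss` in kilobytes on Linux (and in bytes on macOS), so a raw number can be off by a factor of 1024 depending on where it came from. A tiny conversion sketch, with the ambiguous reading from this discussion as the example:

```python
def rss_to_bytes(value: int, unit_is_kb: bool) -> int:
    # getrusage(2)'s ru_maxrss is kilobytes on Linux, bytes on macOS,
    # so the same raw number means very different things.
    return value * 1024 if unit_is_kb else value

raw = 79 * 2**20                  # the suspicious reading: 79 MiB if bytes...
assert rss_to_bytes(raw, unit_is_kb=False) == 79 * 2**20
assert rss_to_bytes(raw, unit_is_kb=True) == 79 * 2**30   # ...or 79 GiB if kB
```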
A
So if this is in bytes, then it's 79 megabytes, and that seems impossibly small, because this isn't just anonymous memory; this includes mapped memory. So maybe this is using kilobytes. It was on my list to check anyway, but now I'm really interested to check it. Okay, so I guess what I'm really kind of getting at is...
A
This is RSS as reported by the process, not by the cgroup, so it includes mapped files. If this process really used 79 gigabytes of resident memory, if it is 79 gigabytes, that process almost certainly wasn't using that much anonymous memory; some of that would have been file-backed. And at that point I'd want to go look at...
A
...that example command, to see what was going on there. So this is the main thing I wanted to get: what are the outliers? And it looks like we don't have any worrying outliers in the last seven days, except for possibly this one above the 99th percentile in the default shard. So...
A
Right, yes, correct. And if I'm going to make some numbers up here: if it really used, say, 10 gigabytes of anonymous memory, and the remaining 70 gigabytes was file-backed memory, then living inside a cgroup that had something less than 80 gigs of budget would work perfectly fine, because the kernel would just evict the file-backed pages and pull them back in as needed.
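The made-up 10 GiB plus 70 GiB example reduces to a simple rule of thumb: absent swap, only the anonymous portion has to fit within the cgroup limit, because file-backed pages can always be evicted and re-faulted. A sketch of that check, with the numbers from the example:

```python
def fits_in_cgroup(anon_bytes, file_bytes, limit_bytes):
    # Simplified model, ignoring swap: only anonymous memory must stay
    # resident; file-backed pages can be evicted and pulled back in,
    # so they do not have to fit under the limit all at once.
    return anon_bytes <= limit_bytes

GiB = 2**30
assert fits_in_cgroup(10 * GiB, 70 * GiB, 80 * GiB)   # fine: cache is evictable
assert not fits_in_cgroup(90 * GiB, 0, 80 * GiB)      # genuine OOM risk
```

The cost of being near the limit is extra page faults (thrashing), not an immediate OOM, which is why the 80-gig budget in the example still works.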
A
It doesn't have to keep the entire thing resident. So that's kind of what I wanted to talk through. I'm going to go find out what specific repo this one was on, and I'll probably go back to the logs and find anything that was over...
A
...you know, assuming the units are kilobytes here, anything that was over 10 or 20, just to take a look at those commands and treat them as outliers, and see if there's any kind of pattern that we need to be aware of here. But that's what I'm planning to do for the outlier analysis, and this is what I wanted to reserve, you know, maybe half an hour of focus time for.
A
For sure. So I looked at the epic and looked for the issue; I knew we had an issue for it, so I went and looked for it, and I figured that was a good place to put scratch notes. I'll add a summary note to the main epic. Perfect, yep. Yeah, so that's kind of... okay, let's start the presses. I want to just... here.
B
So, speaking of the epic: since we're changing the strategy a little bit, I was thinking maybe of closing... like, let me share my screen quickly. Yeah.
B
Yeah, like the Canary one, we can close this. We can probably close this, or...
B
Close it, yeah, yeah. And then have this one and, yeah, the change management issue, instead of the change management one that she wrote, since we're doing...
B
And then, yeah, that's it. One thing I wanted to talk to you about is the prerequisites that you talked to me about on Slack. Yes.
B
So I started looking a little bit at this before our call. It is strange, because cAdvisor is enabled on the Canary node and Prometheus is configured to scrape it, so I'm not sure. Oh...
B
...you were up late, yeah. Oh, that should include it, maybe. Oh...
A
...hoping to have that ready for your review. Okay, fingers crossed.
A
Yeah, I was hoping that you wouldn't mind doing that. Yeah.
B
Yeah, I saw the incident channel. It doesn't...
A
Oh my gosh, this is just like a collection... like, I've still got this open in the window. My gosh, I have so many of these.
A
Yeah, like... I wanted to split the row, so instead of just one cgroup row, I wanted to have one for CPU and one for memory, mainly.
A
I wanted to, you know, make the context clear, and I wanted to add some more similar panels. I'm switching the units to seconds, to match the other panels on the same dashboard.
A
That makes sense, yeah. The way you set these up made it really easy to modify; it was absolutely fantastic. And I wanted to add... so I copied your existing kills-per-node panel into this memory cgroups row, because it'll be directly relevant. Fair...
B
...enough. I added it to the node performance one.
B
It doesn't hurt anything performance-wise, so yeah, it's fine. Yeah, awesome.
A
And the other thing I wanted to add: this won't take very long, but I'm going to add to the memory cgroups section the... it drives me nuts that they named it this way, but: the cgroups RSS.
A
Yes, exactly. And I'm probably going to call it something like "anonymous memory usage" and, parenthetically, "cgroups RSS", because people will misinterpret it. Everyone has a preconceived notion of what RSS means, based on, you know, the last 30 years, and having cgroups use the same word for something completely different is just... oh, anyway.
B
And when are you on PTO, if you don't mind me asking?
A
I think it's... I should know this. One second.
A
I'm totally wrong about that: it's starting on September 26th. It's not next week, it's the week after next, so I'll totally be here for this. Okay.
B
Then... but I do want to potentially write some runbooks as well for the on-call people, even just covering the basics of cgroups, or looking at, like, the RSS metrics and things like that. Even showing them the dashboards would be useful.
B
So I'll take that on as well. Okay, but yeah, I think we have a clear plan, right? Yeah.
A
I think so too. Yes, I think that was everything I wanted to chat about. And... are you comfortable with this?
B
Yes, thank you; and you're comfortable with this. So, to reiterate: you'll open up the Chef merge request; I'll look at the Canary scraping issues; you'll also push out the dashboard updates; and I will fix the relabeling issue that we have, likely rolled out tomorrow. And then on Monday morning my time, I can start with the one percent and ten percent rollout, leave it for a day, then move on to 50, then, yeah, leave it for two days and move to 100, and hopefully nothing blows up.