From YouTube: Scalability Team Demo 2022-01-27
A: All right, welcome to today's scalability team demo. I have the first demo on the agenda, and, well, this is not something that I can completely demo, because it takes around 40 minutes, but I wanted to talk about what we did with the Packer images, which is what we are using as a base for our OS upgrade process, specifically the Gitaly nodes. What we did is build this process into CI to make it more repeatable and more reliable.
A: So if you go to the config management project on ops and go to the scheduled pipelines, I created a pipeline for staging. Actually, what I could demo is creating the one for production, which I can do now, and this is documented in the Packer readme. What you do is create a pipeline and put in these values — these are not secrets, this is safe to share — and then you set, well, you need this build-packer-images variable for the pipeline.
A: This is just a condition to run that particular job, and then you populate certain values. The most important one is which Omnibus package version you want to build, so you specify a full version string and then trigger the pipeline. You then get a job that is going to build an image with that Omnibus package version, run chef-client, and do all of that. I can actually show the last pipeline, which would have been the one that we used for the staging upgrade rehearsal.
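For readers of the transcript, a minimal sketch of what such a gated, scheduled-pipeline job could look like; the variable names (BUILD_PACKER_IMAGES, GITLAB_OMNIBUS_VERSION) and the template path are illustrative assumptions, not the config-management project's actual definitions.

```yaml
# Illustrative sketch only: variable names are assumptions.
build-packer-image:
  stage: build
  rules:
    # Only run when the scheduled/manually triggered pipeline sets the flag.
    - if: '$BUILD_PACKER_IMAGES == "true"'
  script:
    # Bake an image pinned to an explicit omnibus package version.
    - packer build -var "gitlab_version=${GITLAB_OMNIBUS_VERSION}" gitaly.pkr.hcl
```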
B: Yeah, you just trimmed stuff until there was nothing left to trim.
A: I did watch the logs while I was building this pipeline, and it seemed to spend a lot of time in the unattended upgrades that you get.
A: Yeah, this is precisely why we did it this way.
A: I was gonna say, there's something a bit hacky, which is that you kind of have to predict how many times the machine reboots during the process. Since you're not using a bootstrap script, you have to do the bootstrap manually, and we use expect_disconnect so that we allow those shutdowns to occur.
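For context, expect_disconnect is Packer's shell-provisioner option for tolerating connection drops across reboots; a minimal sketch (the upgrade commands shown are illustrative):

```hcl
# Illustrative provisioner: expect_disconnect lets the SSH connection
# drop (e.g. across a reboot) without failing the build.
provisioner "shell" {
  expect_disconnect = true
  inline = [
    "sudo apt-get -y dist-upgrade",
    "sudo reboot",
  ]
}
```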
B: So is this — I've never used CI cron jobs — is this a hack so that you can set an environment, so you can set CI variables?
A: Yes, I'm basically using this as a saved template or something like that. I'm using it to hold these values, because they are always the same, so yeah.
B: Because you don't want to commit them into the repo, and you don't want to go into the project settings to set them, because maybe not everybody has that access.
A: Yeah, I was saying: is this a feature that GitLab should have, where you have one set of variables and then another set of variables? I guess an inactive scheduled job does that trick, but...
B: So on a day where we do upgrades, you first tell delivery — or you ask delivery — is it okay to go ahead and do this now, and then you wait 40 minutes for this to run. Then you do the test: you run the thing on the Packer test instance, so you validate it.
A: Yeah, the setup step takes about an hour. I also added an extra step to take snapshots of the current boot disks, which I don't think we'll use, but just to be extra sure.
A: Yeah, exactly, so you do that, but the actual upgrade process takes five minutes per service or something like that.
A: I guess I can mention one thing that I wasn't sure about. I did think, okay, we should probably snapshot the current boot disks; but I do think that if we were to roll back — and I want to, next time we do a staging run, because we did the first staging batch, so we're still missing the second batch.
A: We should probably test the rollback procedure, but I actually think that as a first option we should consider not trying to use the image that we take from the snapshot. If you go that route, you have to create a new disk from the snapshot and then swap the disk attached to the machine. I think if we are going to roll back, it's easier to rebuild using the Ubuntu base image.
B: And the existing bootstrap script, yeah.
A: And that will take like 15 minutes, according to what you tested before. But still take the boot disk snapshot at the beginning, just in case that doesn't work — then you have a plan B.
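A minimal sketch of what that snapshot-based plan B looks like on GCP, with placeholder resource names (the VM has to be stopped for the boot-disk swap):

```sh
# Before the upgrade: snapshot the current boot disk.
gcloud compute disks snapshot gitaly-01-boot --snapshot-names=gitaly-01-pre-upgrade

# Plan B rollback: create a disk from the snapshot and swap it in.
gcloud compute disks create gitaly-01-boot-restored --source-snapshot=gitaly-01-pre-upgrade
gcloud compute instances detach-disk gitaly-01 --disk=gitaly-01-boot
gcloud compute instances attach-disk gitaly-01 --disk=gitaly-01-boot-restored --boot
```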
A: Well, that's another reason why I thought to do the snapshot: in the past we have had little changes that were done directly on the machine and are not reflected in Chef, and it has happened that we lost a machine, had to rebuild it, and then it didn't work, because it turned out there was stuff we had done outside of Chef — configuration changes or something. That's not ideal and shouldn't happen, but in case it does happen, yes, we should have the snapshot.
A: Do the other batch in staging, so that we know that the process is good — and the first batch was pretty good, everything went as expected. Then in that second batch, as I said, do a rollback just to validate that that also works, and I think then we are set to do production, which it seems is going to be in March.
C: All right, I guess I'll take the next one. I want to show the current state of the Redis hybrid deployment.
C: So we have an easy way forward, but also an easy way back: we can leave them running as replicas first, and keep them with a low or even zero replica priority so that we don't fail over to them, and kind of go step by step. A lot of this is work that scarbeck has been doing, and a lot of it is what I'm going to be showing.
C: The stuff scarbeck has been working on is around making this procedure more solid — and more existent, because the Helm charts didn't have a story for this at all, so we're kind of inventing this as we go. One sec, I just remembered one setting I want to remove. Okay, so I'm gonna share my screen.
C: The starting point is that we have a VM-based cluster with only two VMs in it, vm0 and vm1. vm0 is a replica.
C: vm1 is the primary. Now I want to add some Redis pods into this deployment, so I'm going to go ahead and install the Helm release. This installs a custom, patched version of the Helm chart; we're working with upstream on bringing these changes into the upstream chart. A lot of stuff has already landed, but there's also still some in flight.
C: On the right-hand side we have the VMs — the top two are Redis and the bottom two are Sentinel. One of the things you might see here is that on initial boot the Sentinel process is crashing: we're currently relying on external DNS, and it takes some time for that DNS record to propagate, so Sentinel is trying to connect to the pod-local Redis and is not yet able to resolve that IP address.
C
Yet
so
it
backs
off,
and
it
takes
about
it's
about
20
seconds
for
this
to
properly
resolve
and
for
the
back
off
to
then
eventually
succeed,
and
so
just
to
get
another
view
into
this.
We
have
the
pod
here
and
any
any
minute
now.
C: Correct, yes. If I go inside this pod, we can see the Redis container is working fine, but the Sentinel container is in crash-loop backoff and is taking a bit of time to bootstrap. Okay, so now it worked — Sentinel managed to connect. One of the configuration settings that I've put into the Helm config is a new external-master setting, and we point this at any one of the VMs.
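The setting being described is roughly of this shape in the Helm values (key names are approximate; the patched chart may differ):

```yaml
# Approximate sketch: point the in-cluster replicas/sentinels at an
# existing VM-hosted primary instead of bootstrapping a new one.
sentinel:
  enabled: true
  externalMaster:
    enabled: true
    host: redis-vm1.internal.example   # any one of the existing VMs
    port: 6379
replica:
  replicaCount: 2
```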
C: We can see — yes, node zero connected, node one connected, so we already have two replicas that connected to the VM. Just to double-check, I can also look at the role for vm1, and indeed we have the other VM as a replica, and then these two pods. If I recheck now — let's see, we don't have the other pod yet.
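The replica check shown is standard Redis introspection (placeholder hostname):

```sh
# Role, number of replicas, and per-replica state as seen by the primary.
redis-cli -h redis-vm1 INFO replication | grep -E '^(role|connected_slaves|slave[0-9])'
redis-cli -h redis-vm1 ROLE
```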
C
Yeah
yeah
there's
definitely
still
some
rough
edges
here.
So
this
is.
This
is
not
super
solid
yet,
but
I
think
a
lot
of
the
basic
ideas
already
there.
So
yes,
so
node
two
is
coming
up
and
if
I
ask
again
we
should
have
four
replicas
and
we
do
so
that's
sort
of
the
the
basics
of
joining
the
clusters
together
without
having
to
run
manual
replica
of
commands.
So
it's
all
baked
into
the
hound
chart.
C: I've still seen some weird behaviors during failovers, where it takes some time to converge — definitely some behaviors that we need to understand a little bit better.
C: Pointing the Kubernetes cluster at a non-primary, it sort of takes a while to figure out: oh, this wasn't actually the primary, so let me reconfigure Sentinel, because I now know that this other node is supposed to be the primary. And I think there are going to be some subtle things that we need to tweak, like the Sentinel quorum, because we're adding more Sentinels into the mix.
C: That's kind of the basic idea, and it's already looking pretty promising that we can have this kind of hybrid setup. Well, I guess maybe I'm feeling lucky: let's try a failover. We'll see if it works or not — I've definitely seen some weird stuff during failovers before, but let's see what happens. So, who got promoted?
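A manual failover like this is triggered through Sentinel (the monitored-master name "mymaster" is a placeholder):

```sh
# Ask Sentinel to fail over, then check who got promoted.
redis-cli -p 26379 SENTINEL failover mymaster
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
```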
C: Exactly, that was the setting that I was changing right before I started. And this is actually behavior that I've seen even without a VM-based setup — even with just Kubernetes pods.
C: So, we were trying to fail over to node two; node two was trying to connect to itself, and then we failed over to node 0, which we can see here, and it looked like that actually succeeded — it looks like it recovered here as well. This is Sentinel at some point noticing that something is off and performing this +fix-slave-config, where it reconfigures, in this case, node two and says: by the way, we've got a new primary that was elected.
C: So please connect to that one — and now node two is connecting to that one.
C: No, that was it — it's also a pod, so...
C
I
don't
know
why
this
connecting
to
itself
thing
is
happening.
I
think
there
is
some
bug
somewhere
that
is
leading
to
this
situation,
and
we
still
need
to
figure
out
what
is
driving
that.
B: I wonder if working with hostnames is wonky, because Sentinel was originally designed to work with IP addresses and the support for hostnames got bolted on later; maybe there are some funny things you have to do, or not do, when you're using hostnames.
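For reference, Sentinel's hostname support (added in Redis 6.2) is opt-in, which is one reason it can behave surprisingly:

```conf
# sentinel.conf — both default to "no" on Redis >= 6.2
sentinel resolve-hostnames yes
sentinel announce-hostnames yes
```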
C
Yeah,
I
I
wouldn't
be
surprised
if
that
were
a
factor.
Definitely
yep.
That's
that's
pretty
much
all
I
had.
That
was
great
thanks
for
sharing
that.
Thank
you
all
right,
yeah
cop.
B: Oh sorry, I turned a lamp on. I put "other Redis" on the agenda because there is an open incident that, well, at least half of the people in this meeting have been looking at.
B: Because he's on call, and I because I stumbled into it; but Bob has also been looking at it, and Sean has been looking at it, so a lot of people have been looking at it. The incident is #6320, I think. It's about the git service: the git service has a tight apdex SLO, and it's sort of flapping in and out of its apdex SLO.
B: So it's actually a redis-cache problem: we have latency spikes in redis-cache, and they are surfacing in the git service because that has a relatively ambitious latency SLO. I guess that is because the git service has homogeneous traffic compared to the rest of the site — usually the user just comes in, we look up the user...
B
We
look
up
the
projects
we
say
yes
or
no,
so
the
the
rails
requests
are
very,
are
almost
always
the
same
thing,
so
it's
easier
to
set
a
tight
slo
on
it,
and
there
are
two
issues
on
our
issue
tracker
that
andrew
created
based
off
of
dam
land,
where
redis
cache
looks
funny.
B
So
maybe
it's
the
memory
thing,
so
the
cpu
spikes
are
happening
because
of
jeff
the.
If
you
look
at
let's
see
if
I
have
this
still
open
somewhere,
sorry,
I
didn't
prep
this
that's.
B
I
I
don't
know
if
we
can
ignore
them.
I
think
they're
still
interesting,
because
what's
happening
is
that
the
sentinels
do
they
correlate.
B
No,
so
we
can
ignore
them
from
that
point
of
view,
but
they
are
going
to
keep
showing
up
in
tamland
and
it
means
we
have
broken
chef
slightly
broken
chef,
cookbooks
or
chef
config.
So
it's
something
we
should
fix
what's
happening.
There
is
that
chef
keeps
uninstalling
and
reinstalling
os
queryd
on
each
run
and
the
sentinel
nodes
are
single
core
vms.
B: So if you run chef-client and start installing software, it's very easy to saturate the CPU for a long time on those nodes. A Chef run is a relatively big job for these VMs, and the Chef runs do too much work; and because the saturation graphs of the sentinels get bundled up into the saturation graphs of the redis-cache service...
B
We
get
cpu
the
the
cpu
graph,
some
of
the
cpu
saturation
graphs
look
bad,
but
if
you
drill
down
on
where
it's
coming
from
it's
coming
from
the
sentinels,
but
that
doesn't
cost
the
latency
and
it
would
have
been
funny
if
it
did
because
there's
a
reason
we
can
get
away
with
running
the
sentence
on
single
core
vms
they're,
the
control
plane.
We
shouldn't
be
talking
to
sentinel
all
the
time.
B: Yeah, that is probably the most important thing. We should also fix the Chef thing, but something like that is going to happen again.
B: Well, it's one of the things I wanted to talk about, because it shows up in Tamland, and one of the things we do as scalability is to try to understand funny things in Tamland. I just realized this today because, with Alejandro's help, I turned off the chef-client and let it be off for two hours, and then it became clear that those CPU spikes went away.
B: But those don't explain the latency; the latency is the real problem. The next thing I would be looking at is the eviction behavior, because Bob pointed this out: we're going to have periodic, beating eviction behavior on redis-cache, because it's an LRU — we just let it fill up, and then we start...
B: ...we ask Redis to throw things out. I haven't even looked at what this would mean, how to verify it, or what we can do about it. But maybe the eviction activity is so much work that it's causing the latency — which, when I say it out loud, seems very natural, because garbage collection causing latency is a common problem, and this is a form of garbage collection.
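A first check on that theory is whether eviction counters spike along with the latency (standard redis-cli commands; the hostname is a placeholder):

```sh
redis-cli -h redis-cache-01 CONFIG GET maxmemory-policy        # e.g. allkeys-lru
redis-cli -h redis-cache-01 INFO stats | grep -E 'evicted_keys|expired_keys'
redis-cli -h redis-cache-01 INFO memory | grep -E 'used_memory_human|maxmemory_human'
```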
D: We've definitely seen that pattern of behavior in some past incidents, and this is reminding me — Igor, you may remember this — we have definitely seen events where microbursts of eviction events and garbage collection can potentially be driven by...
D: ...let's see — client buffers count against the memory budget, so depending on how we've got Redis configured, that can potentially drive eviction events as well. So if a client...
D: ...requests a large response, or many clients connect and need buffers allocated, this can actually drive microbursts of eviction events, which kind of self-amplify.
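The client-buffer pressure described here can be inspected directly (real commands; hostname is a placeholder):

```sh
# Largest per-client output buffers; big "omem" values are the suspects.
redis-cli -h redis-cache-01 CLIENT LIST | grep -o 'omem=[0-9]*' | sort -t= -k2 -rn | head
# The limits those buffers are checked against.
redis-cli -h redis-cache-01 CONFIG GET client-output-buffer-limit
```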
B
It
could
also
be
that
something
in
the
application
is
storing
unwisely
large.
B: ...blobs in Redis — and if you do that, then you also change the...
D: Absolutely, and there are some distinctive patterns that we can identify in perf profiles; Flamescope is really useful for finding whether those events are happening on a small time scale. So I would suggest that as a next step. I'm just talking about some past analysis that we've done — I have not looked at this particular incident. I've been diligently not looking at this incident in an effort to do project work, and you're dangling a carrot in front of me.
B: Well, I don't mean to — I'm not out of ideas yet. I'm not asking you to drop your project work, let me put it that way.
B
Is
the
the
periodic
scan
that
craig
set
up
where
we
look
at
the
big
keys?
Because
it
might
be
that
if,
if
a
big
key
is
to
blame
that
it
shows
up
there?
And
then
that
could
lead
to
a
clue.
But.
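That scan is presumably along the lines of redis-cli's built-in key samplers:

```sh
# Sample the keyspace and report the largest key per type.
redis-cli -h redis-cache-01 --bigkeys
# Redis 6+: rank keys by serialized memory usage instead of element count.
redis-cli -h redis-cache-01 --memkeys
```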
D: Yes, I would suggest — maybe not as your very next step, but as something to do in the near term — capturing... is there a periodicity to this?
B
Kind
of
but
it's
not
it's
not
super
regular.
So
when
I
consider
just
I
mean
I
could
just
set
up
a
very
long
perf
and
try
to
catch
it.
That
way.
B: When we were looking at that one misbehaving Gitaly server, I did some stuff where I made like a triggered perf. I could come up with a trigger — it's not going to be the same trigger as it was on that Gitaly server, but...
D
If
it's,
if
it's,
what,
if
it's,
what
I
suspect,
then
I
don't
think
it's
possible
to
get
reactive
enough
to
see
the
the
lead
up
is
the
most
interesting
thing
in
these.
Okay,
as
I
recall,
I
think
these
microbursts
were
on
a
time
scale
of
like
hundreds
to
a
few
thousand
milliseconds,
so
it'd
be
hard
to
be
reactive
enough
to
catch
that
if
you're
not.
D: Another possibility is to just run a 10-minute record at 99 hertz.
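That is, something like:

```sh
# Whole-system sampling at 99 Hz with call graphs for 10 minutes;
# low enough frequency to be safe on a busy production host.
sudo perf record -F 99 -a -g -- sleep 600
```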
D: It won't have a discrete signature, but it should give us some clues. I think either of those would be reasonable next steps to qualitatively find what kind of events we're looking for.
B: What did you do about that last time — or what did we do about it last time?
C: I remember one of the things, which was tuning the timeout duration for when the Redis server disconnects idle clients. One of the correlations we saw back then — and it was not quite clear which one was driving the other — was with bursts in reconnects.
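That knob is Redis's server-side idle timeout:

```sh
redis-cli -h redis-cache-01 CONFIG GET timeout   # seconds of idleness; 0 = never disconnect
redis-cli -h redis-cache-01 CONFIG SET timeout 0
```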
D: Yeah, it was literally that we had a large number of clients that had relatively infrequent need to use those connections, so the connections would eventually reach their idle timeout, and then...
B
So
redis
was
kicking
them
out
and
but
then
why
do
they
all
come
back?
At
the
same
time,.
B: Yeah, but if they come in a burst, that means that on the client side there's some coordinated behavior, or some beating behavior, that causes all these clients to connect at the same time.
C: And we were kind of banking on that improving things. As it so happens, Redis 6.2 was just merged into Omnibus, but doing those upgrades takes a long time, so I can't make any promises about when that will land.
D: And again, I think there's a very reasonable expectation that — like one of your earlier guesses, Jacob, was...
D: ...a large — exactly, large keys. I know that we've had instances of that in the past; it's certainly a plausible explanation for having eviction bursts. Have you seen metrics indicating that we do have spikes in eviction rates during these latency spikes?
B
That's
that's
still
speculative.
There
is
it's
certainly
plausible.
I
was
just
yeah.
Well
I
mean
I
I've
seen
some
graphs,
but
I
haven't
managed
to
line
them
up
yet
with
with
the
latency
spikes,
so
I'm
not
super
sure
but
they're,
okay
yeah.
I
I
just
need
to
line
up
more
graphs.
First
yeah.
D
That's
fine.
I
yeah.
I
would
definitely
lean
towards
getting
getting
getting
at
least
a
few
minutes
worth
of
data
into
flamescope.
So
we
can
see
what
the
cpu
usage
behavior
is
on
during
different
phases
of
one
of
these
events
and.
B: Yeah, the funny thing is that I got this issue assigned to me anyway, to look at this problem, and now... okay.
B: We already have one — let me... okay, let me...
B: From memory, I think you're also doing other impactful project work. I really wouldn't want this to take away from the perf profiles for the Gitaly team; I think that's also very impactful.
B: I am, yeah.
B: I'm confused now: where is this issue? I'm not showing anything confidential that's going into the recording? Oh, it's not even on this tracker, it's in some other tracker. One second, let me...
B: So we have one for this CPU scheduling-wait saturation, but I think that's the Chef runs on the single core — pretty sure, because it stopped when Chef was off. But then there's another one for memory utilization, and this is very macro, because we're looking across months. I know it seems like a long shot to say that this macro behavior is also to do with the micro behavior, with the evictions, but I was asked to look at this because of that.
B: Thanks for letting me talk about this, and for sharing what you learned in the past. I think I can hand over to the next person, who I think is you, Matt. Yes.
D: Okay, so this is demoing — yes, this is really about broken flame graphs. I've mostly been talking about this in the context of Gitaly, but it's really a general problem for Go binaries.
D
I'm
gonna
talk
about
this,
mostly
for
folks
that
might
be
watching
the
recording,
because
I
think
all
of
you
are
already
aware
of
this
problem,
so
so
most
most
binaries
that
are
not
go.
Binaries,
like
bin
ls,
for
example,
has
has
a
new
build
id
which
you
can
see
by
running
file.
If
you
look
at
a
go
binary,
for
example,
if
we
pop
over
to
let's
go
to
a
production
box,
so
if
we
go
over
to
a
italy
node
today,
we
now
delightfully.
D: ...have — sorry, I can't type as fast as I can talk. We now have — I'll just show it. That's fine.
D: Thank you, I'm sorry — I'm so in the habit of sharing my whole desktop, I forgot that I just shared the one screen there. Okay, so /bin/ls — let me recap briefly. /bin/ls is an example of a C binary; it's got a GNU build ID here. Our Gitaly binaries, as of a few days ago, now also have a build ID, which we can see here.
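The check being shown on screen is just file/readelf (the hash value below is illustrative):

```sh
$ file /bin/ls
/bin/ls: ELF 64-bit LSB shared object, x86-64, ... BuildID[sha1]=897f49ca..., stripped
$ readelf -n /bin/ls | grep 'Build ID'
    Build ID: 897f49cafa98c11d63e619e7e40352f855249c13
```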
D
They
already
had
a
go,
build
id
and
now
they
have
a
new
id,
and
this
means
that
this
new
bill
id
is
used
as
a
cache
key
for
it's
used
in
in
two
ways
by
by
perf,
which
is
one
of
the
common
tools
we
use
for
profiling.
It's
also
used
by
other
new
tools,
but
I'm
going
to
focus
on
perf,
because
that's
our
main
use
case.
D
So
so
this
build
the
new
build
id,
unlike
the
goblet
id
can
be
used
by
canoe
tools
like
perf,
for
for
several
important
things,
two
of
which
really
matter
to
us.
One
of
them
is
when
we
run
a
perf
profile
like
I
wonder
if
I
actually
have
an
example,
I
can
look
at
here.
D
Oh
yeah
that'll
work
that'll,
that's
perfect,
so
so
here's
a
perf
data
file
from
a
few
days
ago
and
if
we
ask
perf.
D
To
show
us,
oh
yeah,
only
root
can
read
that,
so
this
command
is
asking
I'm
sorry.
This
is
at
the
bottom
of
my
screen.
So
it's
probably
a
little
hard
to
see
this
is
asking
perf
to
look
in
that
to
look
in
that
that
profile,
that
perf.data
file
and
show
a
list
of
all
of
the
binaries
that
it
captured
during
that
run.
This
includes
shared
object.
Files
like
this,
as
well
as
as
well
as
executable
binaries,
like
c
advisor
noaa's,
query,
and
things
like
that,
and
you
can
see
that
it's
got
two
columns.
D
One
of
them
is
the
the
path
that
that
the
binary
was
that
the
binary
had
at
the
time
of
the
capture,
and
the
other
is
the
cache
key
which
in
this
case,
is
a
new
build
id
and
then
for
a
lot
of
these
binaries.
We
don't
have
a
value
that
it's
it's
null
and
giddily
is
obviously
an
important
member
of
that.
So
what
we've
done
here
and
the
the
reason
gitly
doesn't
have
a
value
here
is
because
it
didn't
have
a
new
build
id
at
this
time.
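The two-column listing described sounds like perf's build-ID report (a real subcommand; the output below is an illustrative shape, with an all-zero ID standing in for the missing value):

```sh
$ sudo perf buildid-list -i perf.data
4803b86eb5f2d21dd2a9b25cb20c97e96f79e2a3 /lib/x86_64-linux-gnu/libc-2.27.so
0000000000000000000000000000000000000000 /opt/gitlab/embedded/bin/gitaly
```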
D
So
what
we've
solved
here
is
by
adding
a
good
new
build
id
now
now
this
profiling
data
can
discretely
identify
this
binary
rather
than
saying
well,
whatever
binary
happens
to
be
at
that
path.
Now
was
probably
the
same
one
that
was
there
when
this
profile
was
captured,
let's
assume
that
it
is
and
use
it
for
for
symbols.
So
here's
the
problem,
switching
back
to
the
to
the
the.
What
I
wanted
to
show
here
was
a
pair
of
flame
graphs,
so
this
is.
D
This
is
two
two
flame
graphs
generated
from
exactly
the
same
perf
data
file.
In
fact,
it
is
this
perf
data
file
that
we're
looking
at
right
here.
This
this
flame
graph
was
generated
with
the
with
the
original
binary
by
running
perfscript
seconds
after
the
the
perfect
chord
was
joined
and
this
flame
graph
consumed
exactly
the
same.
The
same
data,
the
same
data
file,
the
same
perf.data
file,
but
it
was
done
a
few
days
a
few
days
later
after
another
good
lead
was
deployed.
D
Now
we
deployed
italy
as,
as
you
all
know,
we
deployed
italy
fairly,
often
so
in
in
this
case.
Let
me
take
away
the
in
this
case.
Every
time
we
build
gidly
there's
you
know,
if
there's
any
source
code
changes
at
all,
it's
very
likely
that
the
symbol
table
is
going
to
have
different
mappings
for
what
what
address
is
the
starting
address
for
each
function,
and
this
means
that
that
inferring,
which
function
when,
when
you're
doing
when
you're
doing
a
unwinding
of
a
stack.
B: And one of the funny things here is that if you cannot resolve a stack frame, you just get an unknown; but for whatever reason, perf is finding things — they're just all wrong.
B
So
it
it
both
looks
like
because
if
you
look
at
friend
graphs,
sometimes
you
have
a
flame
graph
here,
there's
a
big
tower
of
unknown
stacks
and
you
don't
know
it's
very
clear
that
you
don't
have
the
data.
But
here
you
can
sometimes
it's
really
close
to
the
same
same
flame
graph.
But
it's
not
because
it's
just
jumbled
the
the
table
from
another
binary.
D
Exactly
exactly
and
the
main
difference,
that's
exactly
right,
so
I
wanted
to
demo
the
in
this
case.
I
wanted
to
demo
the
scenario
where
you
get
incorrect.
Incorrect
function,
names
incorrect
symbol
resolution
because
it's
it's,
I
think,
by
far
the
most
insidious
case,
because,
if
someone's
working
on
working
on
doing
an
analysis
and
they
get
wrong
data
like
what
we've
got
here,
all
of
the
highlighted
frames
here
are
wrong.
D
By
the
way
I
found,
I
did
the
comparison
with
the
correct
flame
graph
here
versus
the
versus
the
incorrect
one
and
pulled
out
regular
expression.
Fragments
of
the
incorrect
stack
frames
just
to
make
the
highlighting
easier,
because
I
didn't
want
to
do
it
by
hand,
and
I
wanted
an
easy
way
to
paste
the
same
regex
in
both
of
them.
So
so
you
can
see
very
clearly
that
all
of
the
ones
that
are
highlighted
down
here
are
wrong.
There
are
more
wrong
ones.
D
I
just
stopped
after,
like
10
or
so
and
they're,
not
all
wrong,
like
some
of
these
are
correct,
like
the
the
chainstream
server
func
1.1.1.
That's
those
are
those
are
correct,
symbol,
resolutions
in
in
most
cases,
but
many
of
the
other
ones
are
wrong.
So
this
is.
This
is
really
really
frustrating
situation
for,
for
someone
doing
analysis
to
have
incorrect
data,
it
makes
people
mistrust
appropriately
mistrust
the
tooling.
So
I
feel
like
it's
an
important
problem
to
solve.
D
We
have
as
of
a
few
days
ago,
we've
solved
it
for
italy,
because
our
giddely
binaries
do
now
have
new,
build
ids
and
in
going
going
forward,
that
means
that
we,
a
the
the
the
perf,
the
perf.data
files,
will
now
actually
contain
an
id.
So
we
won't
have
to
just
make
the
assumption
that
whatever
is
at
the
path.
D
Now
is
the
same
thing
that
put
the
path
when
the
profile
was
captured,
and
it
also
means
that
we
get
a
copy
of
the
binary
at
the
time
that
the
profile
was
captured,
added
to
the
build
id
cache
which
again
is
keyed
on
this
value,
and
so,
if
you
don't
have
a
value,
you
can't
add
it
to
the
cache
so
combined.
This
means
that
we
can
a
be
confident
that
we'll
that,
in
this
scenario,
we'll
get
unknowns
instead
of
incorrect
data,
because
the
perf.data
file
knows
exactly
what
the
binaries
id
was
and
b.
D
It
means
that
we're
very
likely
to
actually
be
able
to,
rather
than
getting
unknowns,
it's
very
likely
that
we'll
be
able
to
get
the
correct
value
because,
because
we'll
have
copied
the
correct
binary
with
the
correct
symbols
into
the
build
id
cache.
D
So
this
makes
this
makes
the
profiling
the
profiling
results
both
more
reliable
because
you
don't
get
incorrect
resolution
anymore
and
more
more
complete,
because
you're
less
likely
to
get
to
get
unknowns
for
binaries
that
do
have
symbols
which
are
go
binaries
typically
do
have
have
a
a
healthy
set
of
symbols.
So
that's
that's
what
I
wanted
to
demo
very
quickly.
B: Thanks. I have a question about that, because I'm still a bit confused. A while ago we made a change in Omnibus to make sure that we always restart Gitaly after a deploy — well, almost always — and from practical experience, that seems to have made these jumbled flame graphs way less common.
B
So
is
this
solving
the
same
problem
differently
or
did
no.
D
That's
a
great
question:
these
are
complementary
problems
in
my
opinion,
so
the
the
frequent
research
makes
it
less
likely
to
get
unknown
frames,
but
it
still
doesn't
let
the
the
build
id
cache
get
populated
and
it
still
doesn't
actually
make
the
perf.data
file
discretely
reference.
What
binary
it
came
from.
So
if
you
ever
go
back
to
that
perf.data
file
and
try
to
reprocess
it
you're
still
very
prone
to
getting
either
missing
or
incorrect
resolution
right.
D: Right, it means that the window is very short. The window for getting missing or incorrect symbols is essentially: did someone replace the binary between the time when you were running perf record...
C: Given how infrequently we deploy — for some measure of infrequent — unless you run it during a deploy, you should currently be fine.
D: That's also exactly why the generic scripts that I have in /usr/local/bin do the same thing: they run perf script immediately after the completion of the perf record, because it avoids this condition.
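A minimal sketch of that pairing (paths are illustrative, not the actual scripts):

```sh
#!/bin/sh
# Record, then resolve symbols immediately, while the binaries on disk
# still match what was profiled.
sudo perf record -F 99 -a -g -- sleep 60
sudo perf script > /tmp/stacks.$(date +%s).txt
```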
D: But I will say that it is super useful in certain cases to be able to reprocess the perf.data file.
B: Can you copy it off the server and still do something useful with it, or do you need to communicate with it?
D
Yeah
well
I'll,
without
without
having
any
this
isn't
sensitive.
So
I
can
I'll
just
not
mention
we
have
one
of
our
marquee
customers.
Our
support
staff
ran
some
perf
records
on
their
system,
but
they
didn't
know
to
run
per
script
at
the
same
time
on
the
same
system.
So.
D
Yeah,
exactly
and-
and
they
ended
up
with
just
completely
wrong
symbol
resolution
because
they
had
run
perfscript
on
a
different
host.
Then
they'd
run
perf
record
because
they
didn't
know
any
better,
and
this
change
makes
us
immune
to
that
problem.
So
they
won't
have
wasted
time
with
incorrect
symbol
resolution.
They
would
have
instead
had
had
unknowns
for
their
symbols
because
they
were
using
the
wrong
version
of
gideon.
D
In
this
example,
and
if
gidley
has
the
new
build
ids,
then
the
perf
data
will
know
what
that
new
build
id
is,
and
thus
will
say
you
know
what
that's
not
the
right,
vibe.
B
Is
valuable
because
I
I've
looked
at
some
of
these
jumbled
flame
graphs
for
a
long
time
thinking
I
this
looks
almost
right,
but
I
know
it's
not
yes
and
then,
after
a
while,
I
I
I
sort
of
need
to
build
up
my
confidence
and
say
no.
This
is
this
cannot
be
right.
I
know
with
what
there's
too
many
things.
I
know
about
this
thing.
That
cannot
happen
here
and
it's
not
because
what
I
know
is
wrong.
D: Another place that I've found it useful is when you're doing a whole-host recording — when you do a perf record -a, for all CPUs, for all processes — and some, but not all, of the binaries have symbols, and then you go and install debug symbol packages after the fact to try to resolve the symbols for some of the processes that were missing symbols.
D
Reprocessing
that
perf
data
file
is
super
unsafe
for
go
binaries
in
in
that
scenario,
because
at
that
point
you
would
have
you
know
it
would
have
been
minutes
or
hours
or
days
later,
that
you
installed
the
debug
symbols
depending
on
when
you're
doing
the
analysis.
Does
that
make
sense?
So
that's.
B
Seriously
yeah,
I
I
I
I'm
actually
trying
to
ask
more
general
questions
not
like.
Why
do
we
need
these
new
build
ids,
but
I'm
asking
the
more
general
question
of
what
can
I
do
with
a
perf
data
file,
because
okay,
there's
one.
B
But
so
one
thing
you
just
indirectly
told
me
is
that
if
you
don't
like
your
perf
script,
outputs
you
can
and
if
you
can
find
the
matching
debug
symbol
packages,
you
can
run
it
again
and
then
then
what
happens
like
where?
Why
do
those
debug
symbols
get
installed?.
D
Yeah
yeah,
so
so,
for
example,
for
all
of
our
chef
managed
servers.
The
the
my
chef
recipe
is
installing
the
debug
symbols
for
libs
for
glibc,
so
that,
because
it's
so
so
frequently
used.
D
Yeah,
it
doesn't
see
files,
there's
a
yeah,
so
there's
a
there's
a
patent,
so
usually
the
debian
packages
have
a
suffix
of
dash
dbg
for
debug
or
dbg
sym
for
debug
symbol.
There
are
two
different
conventions
for
packaging,
debug
infos
in
and
sorry
I'm
talking
specifically
about
debian
packages
here.
D
So
I'm
not
sure
how
much
detail
to
go
into
about
this.
When,
when
you,
let
me
just
give
a
super
brief
record-
and
you
can
tell
me
if
more
would
be
useful.
So
when
you,
when
you
build,
when
you
build
a
binary,
it
doesn't
matter
what
language
or
what?
What
platform?
When
you
build
an
elf
binary,
you
can
optionally
strip
that
binary
of
some
of
some
or
all
of.
D
Yeah
yeah
yeah,
so
so
you
can
optionally
take
the
the
the
data
that
we
strip
out
of
it
or
optionally,
the
unshipped
binary
and
package,
and
include
that
as
a
separate
package-
and
this
is
kind
of
conventionally
called
a
debug
info
package
like
I
think,
on
the
wretched.
B: Do the packagers first build the binary with symbols, then strip it and package the other thing, and then give you a choice?
D
Or
do
they
it's
a
it's,
usually
a
supplemental
thing,
so
so
there
are
gosh.
This
is
a
great
question,
so
there
there
are.
There
are
some
traditional
places
to
to
add
debug
debug
info
files.
There's
a
lot
we
can
talk
about
here,
so
so
there's
a
perf
perf
has
so
I
I've
generally
been
talking
about
purpose,
having
kind
of
two
ways
to
resolve
symbols,
and
that's
that's
kind
of
glossing
over
the
details.
D
There's
actually
like
a
list
of
something
on
the
order
of
like
15
or
16
directories
that
it
will
look
in
and
very
high
in
the
priority
list
is
that
there
is
the
directory
that
contain
that
contains
the
build
id
cache.
So
it's
really
just
one
of
the
options.
There
are
other
directories
like
like
user
lib
debug,
I
think,
is
one
of
is
one
of
those
okay,
yeah
and
and
those
comments
somehow.
B
Gets
labeled
with
the
build
id
and
then
they
can
be
joined
up
with
the
data
in
the
book.
D
Exactly
yeah
another
way
is
that
the
binary
itself
can
actually
have
the
the
elf
binary
that
that
we're
actually
doing
profiling
on
can
optionally
have
a
section
called
new
debug
link
that
that
explicitly
says
here's
the
name
of
the
file
that
contains
it
yeah
exactly,
and
it's
usually
not
a
complete.
It's
not.
It's,
usually
not
an
absolute
path.
It's
usually
just
like
you
know
the
name
of
the
file
and
and
then
the
conventionally,
that
file
will
be
optionally
potentially
installed.
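The .gnu_debuglink mechanics look roughly like this with standard binutils (file names are placeholders):

```sh
# Split the debug info out and link the stripped binary to it.
objcopy --only-keep-debug mybinary mybinary.debug
objcopy --strip-debug --add-gnu-debuglink=mybinary.debug mybinary
# Inspect the link later:
readelf --string-dump=.gnu_debuglink mybinary
```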
D: In that marquee customer example I mentioned, that's exactly what we did, because we happened to know exactly what version of GitLab Omnibus they were running. So we installed that and got the correct symbols on a dummy host with a copy.
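perf can then be told about that copy explicitly via the build-ID cache (a real subcommand; the path is illustrative):

```sh
# Register the matching binary so later `perf script` runs resolve its
# symbols by build ID rather than by whatever sits at the original path.
sudo perf buildid-cache --add /opt/gitlab/embedded/bin/gitaly
```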
D: As a very brief segue, to build on Igor's example: we usually do frame-pointer-based stack unwinds, but you can optionally do — there are a few other modes.
D
One
of
them
is
dwarf,
based
that's
more
expensive
to
capture
much
much
larger
in
terms
of
the
perf.data
file
and
more
expensive
to
process,
and
it
can
potentially
be
outrageously
more
expensive
to
process,
and
you
know
yeah
to
the
tune
of
many
minutes
of
cpu
time
to
to
attempt
to
to
do
the
the
resolution,
and
that's
mainly
because,
if,
if
debug
info,
if
dwarf,
if
the
dwarf
data
is
present
either
in
in
the
file
or
in
one
of
the
one
of
the
add-on,
debug
infos,
it's
possible
to
I'm
mainly
mentioning
this
as
kind
of
a
a
warning.
D
The
default
behavior
is.
If
the.
If
sufficient
level
of
debug
info
is
available
to
attempt
it,
then
perf
will
attempt
to
do
to
do
to
do
unwinding
of
inline
functions,
and
that
is
incredibly
expensive.
So
usually,
when
I'm.
D: ...with DWARF-based perf records, I will explicitly disable that — there's a --no-inline option.
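That workflow is roughly:

```sh
# DWARF call graphs are captured per sample, so perf.data gets large.
sudo perf record --call-graph dwarf -F 99 -a -- sleep 60
# Skip the very expensive inlined-function resolution when post-processing.
sudo perf script --no-inline > stacks.txt
```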
D: One other thing I wanted to mention, since you were asking what else you could do with perf script: the default output format is really useful for what we generally do, making flame graphs, but it actually has a lot of different output formats you can use, depending on what events we're capturing. So when we're doing the default CPU-timer events, even with just that format...
D
With
that,
with
that
events,
as
the
as
the
recording
side,
it's
sometimes
useful
to
get
the
default
output
format
for
perf
script.
In
that
case,
it
gives
us
a
process
id,
but
not
the
the
task
id
the
thread
id.
Sometimes
it's
useful
to
get
a
breakdown
by
thread
id,
and
for
that
you
need
to
run
perf
script
again
with
with
a
different
format
string
right,
but
in
a.
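The format-string selection is perf script's -F option (a real option; this particular field list is just an example adding the thread ID):

```sh
# Default-like fields plus `tid` for a per-thread breakdown.
sudo perf script -F comm,pid,tid,time,event,ip,sym,dso
```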
D
Yep
absolutely
so,
and
I
think
that's
why
it's
nice
to
have
that
not
be
the
default
behavior,
but
there
are
some
cases
where
I
do
want
that,
like
actually,
redis
is
a
great
example.
Yes,
yes,
this
has
a
static.
B: Yeah, right — thanks, that's really useful. To counter that: I imagine that in the cases where you need to do that, once you know, you don't have to do it again, because if we find out we need these debug symbols, we probably would change our provisioning code to just install them always, so next time we don't need to do that.
D: Yeah — I would say... I'm not trying to be argumentative.
D
No,
no
exactly
what
I
would
do,
yeah
sure
yeah.
No,
I
agree.
I
think
that
so
kind
of
practically
speaking,
most
of
the
folks
that
are
on
call
don't
spend
a
lot
of
time
doing
profiling
and
the
the
whole
reason
that
I
made
those
usual
gold
bin
scripts
is
to
make
it
really
easy
to
capture
data.
So
I
think
it's
likely
that
that
most
of
the
on-call
engineers,
if
they
think
to
capture
a
profile,
are
probably
going
to
run.
D
One
of
the
you
know
are
probably
going
to
run
the
script
that
says
capture
everything
on
the
host
for
60
seconds,
because
it
takes
no
argument
and
it's
easy
to
do,
and
if
we
want
to
do
post
processing
on
that,
you
know
after
after
the
capture,
then
then
that's
where
I
think
it's
also
helpful
right
yeah.
C: Yes, you want to look at it, peek at it, from different points of view, and have the ability to go back. We actually saw this recently during one of the weird Postgres issues, where we did capture a perf profile and could then go back and look at, for example, a specific PID and cross-correlate with, say, the Postgres logs: we saw a slow query in the Postgres log, grabbed the PID from that, cross-referenced it with the profile, and saw some of the stacks of why...
D: For Go binaries, the thing that I personally care about — the thing that I feel is the biggest value-add — is having greater certainty that you're looking at correct symbol resolution, and, as a bonus, having it be more likely that you actually get symbols instead of unknowns...
D
No
thanks
to
the
build
the
build
cache
compatibility,
particularly
since
we're
going
to
be
running
profiles
on
I
mean
this
isn't
just
about
italy,
of
course,
but
giddily
is
so
commonly
in
our
framework
I
mean
like.
I
really
want
this
for
all
go
binaries
like
we.
You
know
like
console,
for
example,
I
I
want
console
to
have
this,
so
we
can
get
decent.
You
know
decent
recordings
out
of
console
servers.
B
If
you
want
to
have
it
for
all
gold
binaries,
you
maybe
need
to
take
a
bigger
step
and
get
them
upstream.
Yeah.
D: Agreed, that's my intention. I've spent some time with the Go linker; I have made a stub issue for myself, but I haven't filled in any details, because I've been mostly focusing on getting our own house in order before pushing something upstream. But I do intend to make an upstream issue and hopefully submit a pull request.
D: That's the last thing I want to work on for this epic.
D: Cool. I don't have a good sense of how much resistance I'm going to get on this; I'm anticipating some arguments, I'm not sure.
D: Yeah — there are some challenging bits.
D: Exactly. I think the easiest thing to do is to have the section populated with this...
D
This
is
something
that
I
think
is
is
a
common
practice
in
network
protocols
where
you'll
perform
the
checksum,
with
a
null
value
for
one
of
the
for
one
of
the
headers
and
then
popu
for
the
for
the
checksum
header,
or
just
ignore
the
checks,
no
matter
when
you're
performing
your
checksum
and
then
populate
the
the
value
in
sorry,
I'm
I'm
I'm
digressing
a
bit.
What
I'm
thinking
is
I'll
add
the
elf
section
for
the
for
the
for
the
new
build
id,
leave
the
value
and
get.
D
Zeros
pick
pick
a
size
leave
the
value
as
all
zeros,
then
let
the
go
build
id
get
calculated
with
that
as
one
of
the
as
one
of
the
elves
actions
and
and
then
backfill
the
value.
That's
that's
what
I'm
hoping
that
that
will
work,
but
I'm
not
sure
if
that's.
B
B
To
me
more
like
a
technical
issue
like,
can
you
make
it
work.
B
It
hard
to
imagine
that
they
don't
that
they
would
have
they'd
be
against
them.
It
sounds
more
like
a
technical
challenge
to
me.
Yeah.
D
I
think
so
yeah
like
I
think
that
the
interface
I
I'm
I'm
more
and
more
of
the
opinion
that
the
interface
that
they
provided
while
useful,
is
just
really
kind
of
hopelessly
broken,
because
it
forces
someone
to
pick
a
bill
id
before
the
build
actually
happens
rather
than
after
the.
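For reference, the existing interface being criticized is presumably the Go linker's -B flag, which needs the ID bytes supplied up front; a commonly seen workaround just stamps random bytes (illustrative):

```sh
# -B expects the GNU build ID value before the build happens, so it
# can't be a hash of the build output without a second pass.
go build -ldflags="-B 0x$(head -c20 /dev/urandom | xxd -p -c20)" ./...
```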
B
If
you
use
dash
b
yeah
and
that's
yeah,
no,
that
that
doesn't
work,
no
one
thing
that
they
do
care
a
lot
about
is
the
speed
of
their
builds.
B
So
if,
if
in
order
to
have
these
ids,
you
get
a
performance
regression
on
build
speeds,
then
you
probably
will
get
pushed
back,
because
I
think
they
will
ultimately
they'll
kill
more
about
their
build
speeds
than
about
having
these
ids.
So
I
I
don't
know
I'm
this.
Is
there
yeah?
This
is
my
gut
feeling.
So,
okay,
that's.
B: I think he's also the author of gold, which is one of the main open-source linkers, so he's sort of a linker expert.
D: Yeah, so that's where I'm at: I've got an idea of how to do it, and I haven't started implementation, but I'm looking forward to it.
B
Well,
we've
been
going
over
time
and
we
should.
B
We
should
stop
the
recording
and
wrap
it
up.
I
guess.