From YouTube: Scalability Team Demo 2021-11-11
A
Alerts, silences, notifications. We use Alertmanager to, well, generate alerts, but also to silence recurring alerts. So if there's a known issue we'll put in a silence, and the issue that the on-call often faces is that the silence expires after whatever time we put in, you know, it might be one week or two weeks, and if the issue still exists, then they still get paged for it, or get paged for it again. And it's the person who was on call two weeks ago who has all the context around this.

Now this person gets paged and they, you know, re-investigate the issue, just for someone to come along and tell them "here's the known issue", and so we waste a lot of time. So what we've built is basically a cron job that, at the beginning of each shift, posts to the production Slack channel the silences that are about to expire during the next eight hours, the next shift, so that the person coming on call has the context for all of the long-standing issues that might page them over the next eight hours. This went live, I think, just towards the end of last week, and we've already seen...

Let's make this a little bit smaller. There we go. So this is kind of what that looks like, and we've already gotten quite a few of these notifications, which I think also highlights how heavily we're relying on silences, something we didn't have that much visibility into before.

And I guess I can just search for the silences, yeah. So we can already see there are quite a few, and the response to some of these has also been quite positive. So yeah, that's basically it. It's a relatively simple change, but I'd say it's already having a positive impact.
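(For illustration only: a minimal sketch of what a job like this could look like, assuming Alertmanager's v2 silences API and a Slack incoming webhook. The addresses, webhook URL, and eight-hour shift window are placeholder assumptions, not the team's actual implementation.)

```go
// List active Alertmanager silences that expire within the next shift and
// post them to a Slack channel.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
	"time"
)

type silence struct {
	ID      string    `json:"id"`
	Comment string    `json:"comment"`
	EndsAt  time.Time `json:"endsAt"`
	Status  struct {
		State string `json:"state"`
	} `json:"status"`
}

func main() {
	// Assumed Alertmanager address.
	resp, err := http.Get("http://alertmanager:9093/api/v2/silences")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var silences []silence
	if err := json.NewDecoder(resp.Body).Decode(&silences); err != nil {
		panic(err)
	}

	// Assumed shift length of eight hours, as described in the demo.
	shiftEnd := time.Now().Add(8 * time.Hour)
	var lines []string
	for _, s := range silences {
		if s.Status.State == "active" && s.EndsAt.Before(shiftEnd) {
			lines = append(lines, fmt.Sprintf("- %s expires %s: %s",
				s.ID, s.EndsAt.Format(time.RFC3339), s.Comment))
		}
	}
	if len(lines) == 0 {
		return // nothing to hand over
	}

	msg, _ := json.Marshal(map[string]string{
		"text": "Silences expiring during the next shift:\n" + strings.Join(lines, "\n"),
	})
	// Placeholder Slack incoming-webhook URL.
	if _, err := http.Post("https://hooks.slack.com/services/XXX/YYY/ZZZ",
		"application/json", bytes.NewReader(msg)); err != nil {
		panic(err)
	}
}
```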
B
Yeah, what I like about that is, like you mentioned, the number of silences gives us a better idea of what's going on there. But also, as that builds up visually, it will start to really dominate the channel, which will be a good sign for us to say: hey, why is this list so long?
A
Yeah, yeah, yes. So here's kind of the other side of that, where we cross-link the incident issue to the silence as well.
D
I'm curious: we don't have any validation between the incident or the issue that we're referencing being open and the silence, do we?
D
Can we make a change (and I think the answer is yes here, but it depends on how much work it's going to be) that tells you how many times an alert would have been triggered? I mean, we can look this up in Prometheus to some degree, how many times an alert would have been triggered since the start, and include that in the silence. So you could say this silence inhibited, or silenced, ten alerts, or in the last day it silenced one alert, or whatever, and we can use... Ideally... well, Alertmanager at least has the current alerts that it's trapping, but I don't think it's got any history.
D
But, I mean, this is kind of YAGNI, but you might find that people just start automatically refreshing silences at the beginning of their shift, not knowing whether or not they need to. So maybe, if we add it in there, like "this hasn't alerted", or also "this has alerted ten times and we're silencing it onto a closed issue", that's also not good.
A
Yeah, I'm not sure how easy it would be to do that, the how-often-it-would-have-alerted part. I guess we could try and apply the alerting rule as a...
D
Query. If you take the labels and you look at ALERTS_FOR_STATE, count the changes over time, plus one... If you google it, there's a way that you can look up how many times that alert would have fired for that set of labels, and then that would be your count.
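(For reference, this seems to be the common PromQL idiom: Prometheus's internal ALERTS_FOR_STATE series stores the timestamp at which an alert became active, so counting how often its value changes over a window approximates the number of firings. A minimal sketch using the Prometheus Go client; the server address and alert name are placeholders.)

```go
// Estimate how many times an alert fired in the last day by counting value
// changes of the internal ALERTS_FOR_STATE series (its value is the alert's
// activation timestamp, so it changes on each re-firing).
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Assumed Prometheus address.
	client, err := api.NewClient(api.Config{Address: "http://prometheus:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	// Placeholder alert name; add more matchers for the silence's label set.
	query := `changes(ALERTS_FOR_STATE{alertname="MyAlert"}[1d]) + 1`

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result) // one sample per matching label set
}
```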
D
It's more complex than that, because you might get 52 alerts, and then when those 52 alerts get to Alertmanager... hey, sorry, my dogs are fighting in my office... it will condense those down. We have what are called group-by rules, and it'll take those 52 alerts and say, well, we're going to turn that into one signal. Sorry.
B
I'm just going to piggyback on that a bit, Igor. I was just looking at the message today, and I don't know if it's worth doing it in Woodhouse, but all of those silences, that's, what, one, two, three, four... six, seven silences, but they're all actually related to the same issue, and I'm wondering if it would be possible to... Oh, we already talked about that.
A
Yeah, that's a good point. I hadn't considered that. I think I'd like to gather a little bit more data on that to see.
A
Yeah, the same issue. I mean, in my experience it's usually one-to-one, and I guess we kind of have that mapping via helicopter as well, because helicopter does do that grouping per issue. So we might be able to get some historical data on that as well, to see if it's worth it.
A
Yeah, I mean, we have quite a few of these tools. There's handover generation, there's newsletter generation, there's a whole bunch of them, and they're deployed in various different places, and I do think there is some value in bringing all of that into Woodhouse. I mean, that's kind of what Woodhouse was designed to be: the central place for this type of thing.
D
But there's also this other incident reporting that I've been talking about us needing, and I spoke to Rachel, and I spoke to you about it, and maybe it would make more sense just to build that there, because we're going to build it from scratch, and there's so much other incident reporting already in Woodhouse, or at least some of it. I know, Rachel, you asked that it be written in Ruby, but...
D
I might look at that, because there was that other presentation that I put together the other day, with all the incidents and the causes of them. The takeaway from that was that we really need to kind of look at triaging the labels on there, and...
C
Having things look better, yeah, and having a proper process around that. Because I fully agree: I really want that incident data to be good, and I think the only way to get it good is to triage it and to have a process.
C
It's just a case of who's responsible for it and when it happens. And I'm hesitant to ask on-call engineers to do even more at the end of their shift, because it's already hard to get people through the on-call process and back into whatever project they're doing, without then adding an extra "oh, you've got to triage everything like this".
D
I was wondering whether it would make sense, in the same way that we review the infradev report in the engineering allocation call once a week, to have it in the incident review session. The data that we have in the infradev report is really useful and kind of sets the tone of the engineering allocation calls, like "this is where we're not meeting SLO" and everything like that, and we could almost have the same thing in the incident review, where it's like...
D
You know, these are the labels that we're commonly seeing, this is whatever data we choose to put in there, and so having it in the incident review call and piggybacking on that, but...

This is the data that's less than two weeks old that hasn't been marked up properly yet, and this is how many things need to be fixed, but within a two-week period, so you're not putting immediate pressure on the engineer on call, and then further down you have... Because the thing is, incidents are rare enough that, in order to get any statistical information, you can't be looking at just the last week or two weeks anyway.
C
I know it sounds like I'm arguing a lot, or disagreeing a lot, about it, and I'm not, but I feel like if you give people two weeks to do it, it won't get done, because, again, people move on to the next thing. I'm trying to think... The development department also uses labels as metrics, or rather, they also use issues and merge requests to produce the metrics that they use to run the department, and I'm wondering who on their side has the responsibility to make sure that those things are up to date.
D
Because there's a different timescale. You know, being on call is a much more synchronous process, and being on call is something that's time-boxed. So we don't want to ping people immediately and say: you're in the middle of an incident, please add these 17 labels to your incident right now, and they'd better be right.
C
Automating it: generating the list of work that needs to be done, like, these are the list of issues, these are the ones that don't have what we need, these are the ones you have to look at, and...
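(As an illustration only, not existing tooling: a minimal sketch of generating that work list from the GitLab issues API, assuming incidents carry an `incident` label and triage is tracked with scoped `severity::` labels. The host, project ID, token, and label scheme are all placeholders.)

```go
// List incident issues between one and two weeks old that still lack a
// severity label, i.e. the triage backlog for the upcoming review.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"
)

type issue struct {
	Title  string   `json:"title"`
	WebURL string   `json:"web_url"`
	Labels []string `json:"labels"`
}

// hasSeverity reports whether any label uses the assumed severity:: scheme.
func hasSeverity(labels []string) bool {
	for _, l := range labels {
		if strings.HasPrefix(l, "severity::") {
			return true
		}
	}
	return false
}

func main() {
	now := time.Now()
	params := url.Values{
		"labels":         {"incident"},
		"created_after":  {now.AddDate(0, 0, -14).Format(time.RFC3339)},
		"created_before": {now.AddDate(0, 0, -7).Format(time.RFC3339)},
		"per_page":       {"100"},
	}
	// Placeholder host and project ID.
	req, err := http.NewRequest("GET",
		"https://gitlab.example.com/api/v4/projects/1234/issues?"+params.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("PRIVATE-TOKEN", "REDACTED") // placeholder token

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var issues []issue
	if err := json.NewDecoder(resp.Body).Decode(&issues); err != nil {
		panic(err)
	}
	for _, is := range issues {
		if !hasSeverity(is.Labels) {
			fmt.Printf("needs triage: %s (%s)\n", is.Title, is.WebURL)
		}
	}
}
```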
C
Yeah, but for the incident information it's a bit different, yeah. As you said, you can't do it at the time.
C
And putting it at the end of the shift is slightly different; there's less of a support structure around that. So it's a tough one.
A
Yeah, I mean, if it turns into sort of a group paperwork exercise, then that could also make that call kind of more boring. So that's kind of the risk that I see there, I think.
D
It's got to happen before the call; maybe that's the deadline for it. Like, I don't do the labels on the infradev issues on the call, I do it an hour before the call. And if we split that up between... I don't know... anything that's between one and two weeks old is due, and it's up to the engineer on call who took the original call to put those labels on, or something like that.
C
Cool. Is there anything else anyone would like to demo, or would people like some time back?
E
I missed the start of the call, so... yeah, maybe I can talk for a little bit about something I've been investigating, because it's on my mind and maybe people have ideas.
E
So we had this long string of recurring incidents with file-43, which hosts a repository of a known customer. We now know that actually all Gitaly servers host a repository from this customer, or more than one, but file-43 had incidents, and they seemed to be centered around that one repository, and we also believed they had to do with one Gitaly RPC. Then people made application changes on the client side so that RPC gets called less, and it seemed that the incidents stopped. But they didn't, because in the past few days we've had a couple of basically the same incidents again, with the same symptoms.
E
The system goes to almost 100, near 95, percent CPU saturation, and the profiles look the same. So maybe it's happening less often, maybe we never knew what the trigger was, maybe there are multiple triggers for it. But I started looking at this again, and I only found out about it because I wanted to tear down the script I made that automatically profiles the server when the load goes up. I thought, let's see if it did anything in the past week, and it actually did something.
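(The script itself isn't shown in the call; purely as an illustration, a minimal sketch of that kind of load-triggered profiler, assuming Linux with perf installed. The threshold, poll interval, and perf arguments are assumptions.)

```go
// Watch the 1-minute load average and capture a system-wide CPU profile with
// perf whenever it spikes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// loadavg1 returns the 1-minute load average from /proc/loadavg.
func loadavg1() (float64, error) {
	b, err := os.ReadFile("/proc/loadavg")
	if err != nil {
		return 0, err
	}
	return strconv.ParseFloat(strings.Fields(string(b))[0], 64)
}

func main() {
	const threshold = 48.0 // assumed; tune to the machine's core count

	for {
		load, err := loadavg1()
		if err != nil {
			panic(err)
		}
		if load > threshold {
			out := fmt.Sprintf("perf-%d.data", time.Now().Unix())
			// System-wide, 99 Hz, with call graphs, for 30 seconds.
			cmd := exec.Command("perf", "record", "-a", "-g", "-F", "99",
				"-o", out, "--", "sleep", "30")
			if err := cmd.Run(); err != nil {
				fmt.Fprintln(os.Stderr, "perf failed:", err)
			}
		}
		time.Sleep(time.Minute)
	}
}
```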
E
It was, I don't know, two or three days ago... yeah, exactly, it was on the ninth, it was yesterday. And it seems to be in, well, what I would call the night, so in the one-to-two a.m. UTC range, but maybe that's too specific. So I've been trying to look at it, trying to understand...
E
...what's going on, and learning things about the Linux kernel, and it's a bit of a mystery. I talked to Sean about this yesterday. Let me see if I can serve some of the graphs up... yeah, so I have an example, some example data here.
E
This is from one of these incidents, so this was yesterday; I don't know if we've had another one since. And this is the incident, quite clearly. This is the total CPU usage across all the cores of the machine, and if that is at 90, 93 percent, that's really bad. And one of the other interesting symptoms is that network utilization usually drops, which is kind of weird: the system is not sending out a lot of data over the network.
E
Luckily, the back button works, yeah. And there are two ways we can get alerted on this: one is that the Apdex dips, and the other, of course, is that these errors go up, and the errors are probably, yeah, spawn timeouts.
E
So that makes sense, but by the time spawn timeouts are happening, things have already been bad for a while. So I think the Apdex is really the reliable indication that something's wrong. And the typical picture: this is another graph that we know means something's wrong if scheduling wait is high, as is this one, which is kind of the same thing but with this other signal mixed in, though that one is not a problem in this incident. And the, yeah...
E
If you look at which processes are active, it's Gitaly and git-upload-pack, so they are competing; they are both on the CPU almost all of the time. And this is from, yeah, 3:44. One moment, I have a profile for this, I just need to find it; I didn't prepare this.
E
So this is a profile from 3:44 on that server, so that's the start of the incident, and what's interesting, what I find interesting, is that... well, when I was talking to Sean, I said _raw_spin_unlock_irqrestore is very busy, but I now understand that that is just a symptom.
symptom.
E
That
is
a
technical
detail
of
how
the
flame
graphs
work
when
this
function
there
are
times
when
the
flame
graphs
can't
see
what's
going
on
and
when
they
start
seeing.
E
What's
going
on
again,
it
looked
like
everything
happened
in
this
function,
but
there's
stuff
missing
up
here
that
we
can't
see
with
the
flame
graph,
but
what
you
see
right
below
it
is
a
hint
of
what's
going
on,
so
this
wake
up,
sync
key
thing
and
the
so
the
weird
thing
here
is
that
this
is
kit
upload
back
it's
trying
to
write
data
and
it's
spent
all
that
almost
all
the
time
it's
spending
there
I
mean
some
of
this
time
is
spent
on
this
is
just
for
interrupts
for
the
soft
interrupts.
E
...and what else is here... copy page, 0.02. I don't know if you can see it on the screen, but this tiny sliver is actually copying data, and all the rest of this is trying to wake up the process on the other end, and the process on the other end is Gitaly. Well, actually, those are these read calls, and here Gitaly is trying to wake up git-upload-pack. So they're going back and forth trying to wake each other up, and they're spending most of their time doing that, not sending any data out on the network.
E
So it's not like getting that data out is intrinsically a lot of work, because gitaly-hooks can do it with this much work. But then it goes from gitaly-hooks into git-upload-pack, and... so here it is reading from gitaly-hooks, which is also not a lot of work, but then the same bytes have to get written to the Gitaly process, and then we get this big chunk, which is two separate writes.
E
So that's what I've been looking at, and I started writing a little program to simulate this, where I have a Go process that reads from a bunch of sub-processes, and I have a little C process that writes data in the same chunk size as git does. I try to run a lot of them at the same time, just to send data through a pipe, because this makes me wonder: is there a problem with sending data through a pipe? Because we're not spending a lot of time reading the data from disk, we're not spending a lot of time in git compressing the data or traversing object graphs and finding the data; we're just ridiculously busy sending it through pipes. So that's a mystery. And thanks... I have an issue for it; I haven't written a lot of comments on it yet, but I'll post more there.
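(Not the actual simulator, which isn't shown: a minimal Go sketch with the same shape, one reader draining many pipe-writing children, with `dd` standing in for the little C writer. The chunk size is a guess, and 60 matches the per-repository process limit mentioned later in the discussion.)

```go
// Spawn many child processes that write through pipes and drain them all from
// one Go process, to stress pipe wakeups rather than disk or CPU work.
package main

import (
	"fmt"
	"io"
	"os/exec"
	"sync"
)

func main() {
	const workers = 60 // assumed; roughly the per-repository process limit

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Stand-in writer child; chunk size is an assumption.
			cmd := exec.Command("dd", "if=/dev/zero", "bs=65536", "count=16384")
			out, err := cmd.StdoutPipe()
			if err != nil {
				fmt.Println(id, err)
				return
			}
			if err := cmd.Start(); err != nil {
				fmt.Println(id, err)
				return
			}
			// Drain the pipe as fast as possible, like Gitaly draining
			// git-upload-pack; the interesting cost is in the pipe wakeups.
			n, _ := io.Copy(io.Discard, out)
			cmd.Wait()
			fmt.Printf("worker %d read %d bytes\n", id, n)
		}(i)
	}
	wg.Wait()
}
```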
E
Next week I have a week off, but, well, hopefully we can figure out what's going on here. Something's not right, and I feel it's important to chip away at this problem. Yeah.
A
Well, one of the questions that comes to mind looking at that, and maybe you already have an answer to this: the profile doesn't tell us how often these things are being invoked, right? It just tells us, sort of statistically, a percentage of time. So, looking at how many write calls we have, how many of these processes we have: is it just many of them doing it for a short period of time, or a few of them doing it for a long period of time?
E
The process count is around 60, which is the limit per repository. There are also these things down here... what's that? Oh, that's 18 Gitaly processes; that's because it can't fork fast enough, so there are a bunch. From looking at logs and other incidents, I think they tend to be long-running, because they can't get their job done, because they're cloning a large repo. This particular project, a lot of the fetches are one gigabyte, and if the system isn't moving very fast while doing a lot of them, then you're going to see individual one-gigabyte clones that take a long time. In other cases, and you can see it in this graph, I sometimes also see 60 processes at the same time and everything's fine.
A
What also just jumped out at me, yeah, and maybe this is normal, but on what you just had open...
E
It's gitaly-hooks, and that might just be a side effect of the Go runtime. So, how many gitaly-hooks threads are there... These threads are not all busy; most of them are idle.
A
And one other idea that I had, just to get it out of my head: for the CPU profile, if we captured the PID, and I think we do capture that by default, we could do a flame graph that includes the PID, yeah, and see if it's correlated to a single process.
E
I've been wondering that, because there's some sort of weird many-to-one thing going on, where there are many upload-pack processes talking to one Gitaly process, and they're waking each other up back and forth. But, like you say, maybe it's just one of them that's waking up quickly all the time, or maybe they're all waking each other up; we don't really know what's going on there.
E
Also, this is a different dashboard, but these pictures are exactly the same as the old ones, the same file-43 incidents that we had before, where we see that all the CPU time is spent in the kernel. And that adds up with that picture, because if you look at this flame graph, all of this is kernel, right, once it's in these syscalls.
E
The orange must mean something; that's soft IRQ, because that also goes up, but I don't know yet exactly what it means. One thing, since you were asking about writes: if you look at system context switches, writes are context switches, but there are context switches that are not writes.
E
But if you look at the start of the incident, it is not marked by a big increase in context switches. There is definitely something going on, but context switches only go up here, and then the thing stops. So it's not like we suddenly have a zillion writes; that's what I was trying to show here. So: ongoing mystery. Thanks.
A
One thing to potentially look at, if we open up that profile in the flame graph tool: what you just showed was that the behavior kind of changed over the course of the incident, right? The CPU utilization remained high, but we kind of start out with low context switches, and only halfway through do we actually start going up.
E
Yeah. Here it's all dominated by socket writes. So this is a very different profile from the first one.
E
In the first one, it's all "I'm trying to wake up somebody on the other end of the pipe", and there are socket writes there, but not that many. And then here the socket writes are dominating: TCP.
E
Yeah, and I wonder if what we're seeing here is that somehow the socket writes get unblocked and it starts sending a lot of data. But where are we buffering that? Do we even have room to buffer all of that? Well, if we did, then Gitaly should get bigger, but Gitaly, this one, is at four gigabytes; it's not really moving during the incident.
E
It's not like Gitaly is ballooning up with all this data from the upload-pack processes and all of a sudden starts sending it through the TCP sockets, I think. But yeah, it seems something gets blocked and everything is just doing these stupid wake-ups, and then it gets unblocked, and then we start sending more data, and then we're good again. You also see it in the network graph.
E
You see this peak here, and this is probably trailing, because it looks back; it's probably a window, because it's a counter, the network transmit rate, so the number would be a rate over a one-, two-, or five-minute window. So the fact that this peaks after we're done actually sort of lines up, because the other profile is from :54. So...
C
Great. Well, if that's all on the cards for today, I'm going to go ahead and say thanks very much for the conversation. I'll upload the video. I hope you have a good rest of your Thursday.