From YouTube: SIG - Performance and scale 2022-01-20
Description
Meeting Notes: https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.yg3v8z8nkdcg
A: Okay, I'll share the document link in chat. All right, welcome to SIG Scale. Please add yourself as an attendee. David and Jen, yeah, that's all we have today. And please add yourself. I do say this every time, but it is important: we have varying attendance on these calls, and I want to make sure it's always reflected here.
A: Okay, so I have two things for today. I was hoping Marcelo would be here, but that's okay. Here are two PRs that were opened. Both Marcelo and I were looking at this, trying to figure out what's going on with this periodic job. We have two different takes on it. I can start with mine, and maybe Marcelo will join to talk a little about what he's been looking at.
A: So I was doing some testing locally, reading some blog posts, and trying to figure out what's going on. One of the things I was focusing on is this range selector piece, which is actually this piece of code right here: the bracket with a duration, right after the metric. What I've seen, and what I've been reading about, is that this has an effect on...
A: ...the data that we observe. This blog post was talking about how it affects what you see. You can see it clearly in this graph: the peaks and valleys are much sharper. When the duration that turns the data into a range vector is much shorter, the change in the data is much sharper. I plugged this into the Prometheus...
A: ...instance we have running locally, and I did a 20-VM test and saw totally different values based on the duration. So I looked at our tests: our test usually runs in a one-to-two-minute range, and the value varied significantly. Here are the two metrics I posted. This was with a five-minute range selector: I see 22. I created 20, so I'd expect it to be right around 20.
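As an aside on why a counter that only went up by 20 can read as 22: Prometheus's increase() extrapolates the observed delta out toward the range-window boundaries, so the reported value depends on how the window lines up with the scrapes. A rough sketch of that behavior (simplified; real Prometheus caps the extrapolation near the boundaries, and the sample data here is invented):

```python
def approx_increase(samples, window_start, window_end):
    """Rough sketch of how Prometheus extrapolates increase() over a range
    window: the raw counter delta between the first and last samples inside
    the window is scaled up to cover the full window duration."""
    inside = [(t, v) for t, v in samples if window_start <= t <= window_end]
    if len(inside) < 2:
        return 0.0  # fewer than two samples: no increase can be computed
    (t0, v0), (t1, v1) = inside[0], inside[-1]
    raw_delta = v1 - v0
    sampled_span = t1 - t0
    # Extrapolate the delta out to the window boundaries (simplified: real
    # Prometheus limits this to about half a scrape interval per side).
    return raw_delta * (window_end - window_start) / sampled_span

# A counter that went up by 20 requests, scraped every 30 s:
samples = [(0, 0), (30, 0), (60, 10), (90, 20), (120, 20)]
# A window slightly wider than the sampled span inflates the delta:
print(approx_increase(samples, 0, 130))  # ~21.7, though only 20 happened
```

This is why hard-coding one window (and comparing results only against that same window) makes the numbers comparable run to run, even if they are not exact integers.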
A: So my thought was that this could be affecting how we're reading the data. Especially if we're looking at thresholds, we probably want this to be consistent; otherwise I don't think we're going to get the right numbers. So I was thinking this needs to be set to something consistent. Five minutes seemed to stabilize the value quite a bit; ten minutes basically brought it down to about 21, which didn't seem to do much more. So I kind of settled on five minutes. That's what this change does, and I want to try it and see if it has any effect. That's one of the theories about what could be affecting this job. I don't know if you guys have any thoughts, but that could be it. I'm not sure, though; I want to see it run first, just to see for sure.
B: Just to clarify, the only change you're proposing here is changing the time value in that bracket at the end?
A: It's two things, actually. It is hard-coding that value, but what I also added is an offset. The offset just moves us back in time; all I want to do is go back to...
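For reference, the shape of the query change being described (a hard-coded range window plus an offset that shifts the evaluation back in time) might look like the sketch below. The metric name and label filter are placeholders, not necessarily the job's actual query:

```python
# Build a PromQL query string with a fixed range window and an offset.
# Both the metric and the verb label are assumptions for illustration.
def build_query(metric, window="5m", offset="1m"):
    return f"sum(increase({metric}[{window}] offset {offset}))"

query = build_query('rest_client_requests_total{verb="POST"}')
print(query)
# sum(increase(rest_client_requests_total{verb="POST"}[5m] offset 1m))
```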
B: Okay, I understand what you're doing. I'm still confused about how Prometheus behaves here, though. With the sum and the increase there, I would expect that, given a time interval, we're getting the sum of everything that has occurred during that time period. The increase is supposed to be a count starting, you know, five minutes back, and we're summing that. It's just odd that we get different behaviors.
A: Yeah, I don't know. Two minutes, one minute... I have no idea. This is the part that concerned me the most. The only thing I thought about, and it's still not clear to me, is that maybe there are two create requests happening, because like this...
A: ...40. So maybe it's summarizing the increase and we're actually at 40? I don't know what to make of this. To me, I only created 20 things, so I expect 20 requests. So this seems like the most correct, or closest to the correct, value. I don't know.
A: Yeah, it's kind of strange, but that was the theory: maybe it could stabilize it. The other thing that's kind of scary, given all this, is that if we're getting different values for these, it'll be impossible to do thresholds; we can't, if we don't have a consistent measurement looking backwards. So I think, regardless, we need to set this to something we consider reasonable to measure with. I don't know what that is; I think five minutes might just be something we can start with. Go ahead, Marcelo.
C: Yeah, if we want to get exactly the interval that we ran the test over, I think maybe we should use query range, because you can put in the start and the end, and then you can be sure about what you collect. You're not querying "five minutes ago" or "one hour ago"; it will be exactly the time that the test executed.
A: Yeah, but here's my concern, Marcelo. When I look at this graph: if we have a range vector that's one minute, there's no value for this over here. We get nothing. There's no metric, no create requests.
A: If we gather here, or anywhere over here, while the test is running here, the create is already done here. So it has to be right when the create is done, not necessarily when the test is run. That's my concern: with this window being too sharp, we're going to miss it. I don't know if we're going to capture this accurately.
A: Well, you can see it here on the graph. I mean, I could run this over here, though I don't actually have that set up to do it. But if you look, you can see how tiny this is. This line stops right here; there's no data here. There are no creates that it gathers over here. If I were to look at the value there, it would be nothing. Whereas there's a larger period of time, over whatever, five minutes or so, that will have this aggregated value that we can capture.
A: Yeah, this is our test; that's about our test time, two minutes or so, where we're concluding. I just don't... I mean, we could do what you're suggesting, if it gives us all the values. And then what we do is... I mean, I don't expect this value to change. We're doing 40 create requests, right? It's not like we're doing one, two, three; it should just be 40.
C: You got 40 in this one timestamp. If we have this five-minute window, it looks like, for example... here is what actually happened: at 12:20-something you got 40 creates. The other points are just what makes the line longer. It's misleading in the end, isn't it? Because you're not doing 40 creates many times, across many different minutes.
A: Right. So are there two for a pod? When it's a create request, are there two POST requests? I didn't think so; I think there's one. That's what's odd about this. Why would there be two? This shouldn't be 40. That's why I don't understand this value.
D: Sorry, yeah, I don't know why it's doubled here.
A: Yeah, I'm not sure what to make of this number exactly. I don't want to fit the data if it's not the reality, and I don't know why it comes down like this, but this is the closest; this is the number I expect. It's just closer to 20, which is about what I expect.
A: Okay. Well, how about... what's the thing you called it, Marcelo, that shows all the data points? It's like a query something, yes?
C: The query range. With those parameters it should show something similar to this, if you want me to just put this query there. And it's possible to define an interval.
C: With query range you can define a specific interval, instead of something like "the current time minus the last 10 minutes". We can just say "between these two times". It should show something similar to what you're seeing here; it's just more flexible for querying things that you ran before.
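What Marcelo is describing is Prometheus's query_range HTTP API, which takes explicit start and end timestamps plus a step, so the evaluation is pinned to the exact interval the test ran rather than "the last N minutes from now". A minimal sketch (the base URL and timestamps are placeholders):

```python
from urllib.parse import urlencode

# Build a Prometheus query_range request URL pinned to an explicit window.
def query_range_url(base, query, start, end, step="30s"):
    params = urlencode({"query": query, "start": start, "end": end, "step": step})
    return f"{base}/api/v1/query_range?{params}"

url = query_range_url(
    "http://localhost:9090",
    'sum(increase(rest_client_requests_total{verb="POST"}[5m]))',
    start=1642690800,  # test start (Unix timestamp, made up)
    end=1642690920,    # test end
)
print(url)
```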
A: Yeah, so how about this: I'm going to look at the query range. I want to see the exact data that it's getting, just to see what else we can find out about this.
C: Okay, so this one, yeah. This was a request from Federico before, because the performance Prow job has a lot of logic inside it, to install KubeVirt and run the jobs, and Federico said that we shouldn't have a lot of logic inside the Prow jobs and should have scripts for that instead. So this is the work for that, and in the end he's right.
C: Now we have the script in the hack folder to run the performance job, but with the functional tests. I created a new file that actually uses the perf-scale load generator to run the performance test, because the plan, as we discussed before, is to replace the functional tests with this tool. There's also the automation folder, which now has a script that deploys the functional tests in the cluster, but we need it to have some logic for running the performance test.
A: It's this issue that you created. I'm glad you brought this one up; I want to review it. But is this the issue? The one I meant to talk about was the missing pod delete and create events. Oh.
B: He's gone, I believe. He's either about to leave or has already left Red Hat, so he cannot review.
C: Just so you know, then, don't just transfer it to him, because it would be good to have someone I can contact. I think no one is reviewing this PR, or my PR, anymore. I know Daniel could, maybe, but I never talked to him.
B: Daniel can help, and I can help; I've just gotten really backed up on reviews. Let's talk about it. What's going on, what have I missed? Is it the performance test that creates the configuration files?
C: I was using the bootstrap image, which doesn't have the Go toolchain, and I needed to change it to a golang image there. So if someone can help with that in the next few days, it would be great. Then we'll have the performance jobs running, and we can start getting data and analyzing results from the CI.
B: Yeah, this isn't very complicated, what you're asking for. I don't see why this would be a problem to merge; it's all pretty isolated as well.
B: That's the problem, yeah. Right. So one thing that I noticed is that you created a separate automation script. This is a new file, yeah?
C: Yeah, I didn't want to mess with the other one before we decide to replace it. That's why I created a new one. We can think later about whether we should replace the functional tests or just leave them there; we can discuss that later.
B: What was the main thing that you needed to add to the script that the other script did not have?
C: Okay, so the first thing is that it installs things in an external cluster. The other one is specific to deploying... it deploys Kubernetes, it creates a cluster, the main cluster. Obviously, yes, you're...
B: ...talking about an external cluster. All right, got it. Interesting. And this environment: how does it get cleaned up, and who's maintaining it? Who's been paying for this environment? Is this something you maintain, or...?
B: I'm not sure who... So I could review this, and I could even approve it and get it in, but I don't have a lot of understanding of who would carry this forward, or of the team involved out there.
B: I would recommend maybe syncing up a little bit with Fabian to understand what the status of that team at Red Hat is, because it's primarily a Red Hat-sponsored effort, and trying to make sure that you're involved with the right people and that they're aware of what's going on. I can help get the code through if we need help with that. I just want to make sure that the right people are aware of the change and have a chance to review it, and I don't know who those are right now.
A: Okay, all right, let's go to the other one, the issue you created here. So this is the one, Marcelo... I introduced these as the two issues that we were looking at, to figure out the density test and what's going on. Do you want to speak to this one?
C: Yeah. So, Ryan actually started this analysis, which is very good. Thank you, Ryan, for actually checking those results from the performance job that was running and identifying this issue.
C: Basically, for everyone who may not be aware: we run the performance job, and sometimes some events are not collected in this metric, rest_client_requests_total, which is a metric that is generated inside KubeVirt.
C: It counts, for example, the number of pods created, the number of pods deleted, and other events as well; I'm just focusing here on delete and create. Sometimes it's missing; sometimes it appears. I was trying to understand it, and I actually did a workaround for it. It's not a fix, just a workaround to be able to see the metrics, but it really needs further investigation. And maybe it's also good to hear David's opinions and thoughts here.
C: Okay, so what's happening here is: when I deploy KubeVirt using, for example, the cluster deploy or cluster sync command, the recently built KubeVirt in the development process...
C: ...I run a performance test that creates, for example, 100 VMIs, and then I run the tool. It doesn't even need to be the perf-scale tool; just go to Prometheus and check the metric. If I check the metric, I don't see the delete and create pod events.
C: It just doesn't appear. However, if I create one VMI and delete one VMI first, what I call warming up the cluster, just making KubeVirt function, and after that I create 100 VMIs and delete 100 VMIs, then I see all the create and delete events in the metric.
C: Yeah, so I was thinking... I don't know what's happening; I have no idea. Is it synchronization, something inside the code, or is Prometheus just not scraping the metrics well? Or something like what Ryan was saying: if Prometheus doesn't scrape at exactly the right interval, is there any possibility of losing the data?
A: It's not that; the data should be there. But when you have it in the graph here, you have an increase, so it would be like... for example, if you were to take data from this point here, if you can see my screen...
A: But it's not like a single scrape from Prometheus. I mean, it would be... so this value, as you can see in the graph, is changing, right? It's changing all the time. So it's sort of whatever you get whenever you're looking at the data.
B: Well, that value is changing over time because it's constantly doing the sum over a five-minute interval. I can't quite see the timestamps at the bottom there... all right, let's see: six minutes, every 30 seconds. So it's going to level out, because for the duration of the test nothing changes, and then it's going to start dropping off, because when you look back five minutes there are fewer and fewer results, until it's gone. So that I understand.
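The shape David describes, the value leveling out and then dropping off as the five-minute lookback slides past the burst of creates, can be illustrated with a toy rolling count. The event times here are invented, not taken from the real job:

```python
# Toy illustration: a burst of create events, then a sliding five-minute
# lookback. The count plateaus while the burst is inside the window and
# falls to zero once the window slides past it.
WINDOW = 300  # five minutes, in seconds

def windowed_count(event_times, now, window=WINDOW):
    """Number of events inside the lookback window (now - window, now]."""
    return sum(1 for t in event_times if now - window < t <= now)

events = [10] * 40  # 40 create requests, all fired at t = 10 s
for now in range(0, 601, 120):
    print(now, windowed_count(events, now))
# stays at 40 while t = 10 s is within the last 300 s, then falls to 0
```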
C: I realized that it happened because I was querying right after deploying KubeVirt. Because if I have KubeVirt running there and I run one test, the metrics appear in all the subsequent tests. It was bugging me, and then I saw that. I also thought that maybe it was some synchronization issue, so I left it waiting for more than 20 minutes, and it still had the same problem.
B: Because the entry doesn't exist yet in the time-series database, and you primed it by creating a VMI. I bet that's what it is. It doesn't exist when you look back at the beginning, so at the start of the interval that create-pod entry doesn't exist, and it's not being summed. But once you create a VMI, and you then go back to a point where that entry does exist in the time-series database, it starts summing it. That's so crazy.
B: Create a VMI at the beginning of the density test, ensure that it gets scraped, delete it, and then only collect after that point, because by then the entries have existed.
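The priming precondition being proposed could be sketched like this. The client object and function names are stand-ins for the real perf-scale tooling, not its actual API:

```python
import time

# Sketch of the warm-up precondition: create and delete one VMI so that the
# create/delete series exist in Prometheus before the measurement window
# opens, then record explicit start/end timestamps for query_range.
def primed_measurement_window(client, run_test, scrape_interval=30):
    client.create_vmi("warmup")   # makes the create series appear
    client.delete_vmi("warmup")   # makes the delete series appear
    time.sleep(scrape_interval)   # give Prometheus a chance to scrape both
    start = time.time()           # open the window only after priming
    run_test()
    time.sleep(scrape_interval)   # let the final samples be scraped
    end = time.time()
    return start, end             # feed these to query_range
```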
C: Yeah, in the scripts in the new test, I'm doing that, and I described it in what I proposed.
B: That explains some of the inconsistencies in what you were seeing as well, Ryan. So for our density test, it's good that you went ahead and added the perf audit to the golang logic, because we're going to have to do a precondition there, to prime Prometheus, in order to look back accurately at the results.
A: I think we do exactly what David just said: we just have the precondition, create one VM and delete it, and that's it. That should tell us what we need, and I think it will give us what we're looking for here. That's just a few lines of change.
B: After we're sure that that result has been scraped, too; the entries have to exist. So maybe: create a VMI, delete it, wait a minute, use that as the starting point for our scraping interval, run the density test, wait a little bit, and use that as the end, to make sure that we've given enough time for the scraping to occur.
A: You have it already, yeah, the...
C: Perf-scale, yeah. If you see here, for example, if we're running that, line sixty-three: that's what I call the warm-up. I run the perf tests with the warm-up workload, which is just one VMI, create and delete one VMI.
B: We're going to be like Prometheus experts.
A: Okay, oh my god. Okay, all right, cool, that's good. All right, so we've gotten through those. Okay, so, we talked about this last time: I just wrote a little work-in-progress PR for what I'm hoping we can turn into something like what Kubernetes has with their SLO document.
A: I'd mostly describe it as a template and a description of the tests that we want to do right now. I don't think we'll have our SLIs for a little bit, so I just want to focus on testing. So I described in here, Marcelo, the tests that we talked about: the first test, what it does, what it measures.
A: So you say what it does and what it measures, and then potentially, once we have some of these (we've already started on some), I think what we do is this: we have our periodic job, we take an aggregate of results over a period of time, we call that our threshold, and then we record our threshold here.
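One simple way to turn that aggregate of periodic-job results into a threshold, as a sketch only (the actual aggregation policy is still open, and the numbers below are invented):

```python
import statistics

# Derive a threshold from historical periodic-job results: take the mean of
# past runs plus a few standard deviations as the acceptable upper bound.
def threshold(history, sigmas=2.0):
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean + sigmas * sd

past_runs = [20.1, 21.7, 20.4, 22.0, 20.9]  # e.g. measured create requests
limit = threshold(past_runs)
assert all(r <= limit for r in past_runs)   # historical runs stay in bounds
print(round(limit, 2))
```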
A: It's kind of what we expect, yeah, and that's how I'm thinking we can go with this. Then over time we can expand it to do some things with the SLOs or whatever; we'll see how far we can take it.
B: I see two "steady-state test" paragraphs.
A: Yeah, so I do... I introduce the test, and then I talk about what it's going to measure. I do that for each of these.
A: Yeah, it's just a description of them. Okay. Well, so what I'll do with this is post it as a patch, and I'll tell you guys, and we can take it from there in review. And Marcelo, you have your document where you've written some thoughts. I couldn't get into your doc for some reason, but if you have other things that we want to have in here, or things that are going to change, that's fine.
A: Yeah, okay, good. That was that topic. And then we covered this one, right, Marcelo? Does that work for you? Okay. And then... oh wait, we did that one too.
A: Cool. All right, I don't think we have any more. Great. I'll post this PR after the meeting, and then I think I'll be able to do some testing for this one today; it isn't too much work, so I'll let you guys know. I'm kind of excited to see what the rest of this is going to be when we get to the end of it, hopefully. Okay, all right, guys, thanks for joining. Have a good day. Talk to you later.