From YouTube: 2021-10-13 Code Review Weekly Sync
A
All right, everyone, welcome to the October 13th Code Review weekly. Animal, you've got the first item, if you want to read it; otherwise we can pass to Andre.
B
No, mine's just an FYI: if you want to look at the competitor evaluation for suggested reviewers, the issue is closed and there's a walkthrough video if anyone's interested. Moving forward, I will come up with some initial designs for how we can integrate it into review.
C
Sweet, thank you, I'll take a look at that link. So this is Tommy's point. I had a 1:1 with him yesterday and I'm presenting his point for him. This is something that he would like to try out with the code review group. Basically, this is something that was created by someone on the quality team, Sophia, and they've been trying it out with other groups. What it is, basically, is a way for groups to identify their risks and, through a collaborative effort, be able to...
C
Maybe I can share my screen in just a second. You'll be able to collectively build sort of a map... oh sorry, sort of a map of... wait, where's the link he had?
C
I should have prepared this better. So, right: basically, this is the engine that powers that website, and it generates a list of risks ordered by impact and probability.
C
This
is
the
impact
and
the
probability
of
recurrence,
and
then
it
kind
of
gives
us
the
whole
group
a
hint
about
what
are
the
risks
that
we
should
be
looking
into?
Is
it
lack
of
coverage
in
tests?
Is
it
unknowns
that
we
might
not
really
get
the
grasp
on?
Is
it
you
know?
Some
other
group
was
storage
costs,
for
example,
so
this
allows
us
to
collaborate
in
categorizing
the
risks,
the
identity,
identifying
the
risks,
but
then
coming
up
with
a
really
easy
to
pre
to
consume
way.
But
let
me
just
find
the
link.
C
Andre, you did it. I think it's the second one. Yep, that's the one, saved today. Thank you. Right, so that's how it looks, and then inside this folder you can list the efforts that are ongoing, their statuses, milestones and everything, with links to the issues. This is all generated automatically: all we have to do to use it is create issues on a certain tracker and categorize them with certain labels, and then the app does the rest.
C
The way he explained it, the priority is calculated by multiplying the impact by the probability, and that gives us the priority result. So his point here was that he was proposing setting this up for code review, and he wanted to hear your thoughts: if you have anything in mind about this, whether it's a good idea or a bad idea, should we go for it?
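The scoring described above (priority = impact × probability, risks ordered highest first) can be sketched in a few lines. This is a minimal illustration, not the actual tool: the real engine is a Jekyll site consuming tracker issues, and the label names (`impact::N`, `probability::N`) and 1–3 scales here are assumptions.

```python
def parse_scale(labels, prefix):
    """Return the numeric value of a hypothetical scoped label like 'impact::3'."""
    for label in labels:
        if label.startswith(prefix + "::"):
            return int(label.split("::", 1)[1])
    return 0  # unlabeled risks sink to the bottom of the list


def prioritize(risks):
    """Order risks by impact x probability, highest priority first."""
    def priority(risk):
        return (parse_scale(risk["labels"], "impact")
                * parse_scale(risk["labels"], "probability"))
    return sorted(risks, key=priority, reverse=True)


# Example risks mentioned in the discussion, with made-up ratings:
risks = [
    {"title": "Performance deteriorating", "labels": ["impact::3", "probability::2"]},
    {"title": "Lack of test coverage",     "labels": ["impact::2", "probability::3"]},
    {"title": "Slow endpoints",            "labels": ["impact::3", "probability::3"]},
]

for risk in prioritize(risks):
    print(risk["title"])
```

With these made-up ratings, "Slow endpoints" (3 × 3 = 9) would sort ahead of the other two (each 6), which is the kind of at-a-glance ranking the tool is meant to give the group.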
A
I don't... I don't understand how it works.
C
Okay, so maybe I can do a better job at explaining. The code that he linked here, the tool itself, is just a Jekyll site that consumes the issues on the project that we define.
C
Then
we,
author,
basically
issues
on
the
issue
tracker,
creating
like
a
risk
saying:
let's
see
what
will
be
a
risk
and
code
review.
C
Not
understanding
a
problem
certain
something
like
that
would
create
an
issue
or
reviewers
not
getting
picked
or
something
when
they
get
a
request
for
attention,
and
we
would
identify
the
impact
of
that.
We
would
classify
how
often
without
you
would
that
happen,
and
then
the
tool
itself
would
aggregate
sort
of
and
would
aggregate
the
efforts
in
in
flight
to
address
that
risk.
There
are
ways
to
assign
that
I'm
not
entirely
sure
how
you
assign
issues
to
to
risks,
but
I,
I
think,
there's
a
way.
Yeah.
A
I don't know, something like slow endpoints, which we certainly have? But we also already have priority and severity labels, so I guess it's not clear to me why we would add a third way to classify things that we already have ways to classify and understand priority for, at least for the things I can think of off the top of my head.
A
So
I
I
would,
I
guess
my
feedback
would
be
like
if
you
can
come
back
and
say
here's
a
list
of
issues
that
might
make
sense
to
do
this.
For
then
like
we
can
have
a
different
conversation
right
now.
I'm
just
not
sure
like
this
feels
like
another
input
for
inputs.
We
already
have.
C
In terms of examples of risks, I can think of a couple: technical debt, complexity of the code, performance deteriorating as new features are added. Those are all risks we can classify and then use to contextualize certain efforts to address them. Test coverage is another one: we have some test coverage in certain parts, and we don't in other parts.
A
Yeah, and I think the things you've mentioned are good. I guess then my question would be: do we have issues for those already, and do they have priorities and severities? Are we calling them bugs, or are we calling them tech debt? How are we already classifying those things, or are we not? If the flip side is that we don't even have any of those things documented, then maybe that's a different conversation, and we need to do those things too.
C
What is the risk impact and what is the risk probability? Specifically, for the one about performance deteriorating as we add new features: what's the impact of that? It's a very popular page in our group. What's the probability of that? We can think about how stable our code is. We can relate that to all the risks in the group, and that can help us focus on the most important ones, so we won't get blindsided by something that's invisible...
C
But
it's
creeping
up
on
us,
I
think
so
yeah.
I
don't
see
an
objection
from
my
part
on
the
on
trying
this
out
also
because
there's
not
that
huge
of
an
overhead
cost
it's
something
that
he
gets
to
set
up
on
his
side
and
we
just
have
to
fill
it
in
with
our
ideas
and
participate.
Basically,
if
we
have
any
but
I'll,
I
think
that
he
can
come
up
with
a
couple
of
examples
for
us
to
discuss
it
further.
D
Yeah, I'm willing to try it. I'm still a little... I have to think more about, like I was saying, what some of the actual risks are, and how we would use this tool in our planning process. Is that the intent of it? Is it a planning tool, like, "oh, this is a top priority and we don't have these specific issues scheduled yet"? Or is it... I'm not really sure.
C
I think the utility of it would be cross-group. I mean, it would also be useful for the quality team to oversee the different efforts that are ongoing across the quality team, the product team and all that. From our side, I can totally see us using this as another source of input for planning, but I don't feel like we need it to make the plan.
C
It
just
makes
our
plan
a
little
bit
better
because
we'll
be
able
to
identify
risks
sooner,
but
it
kind
of
like
flips
a
little
bit
the
thought
process,
but
yeah
yeah.
C
That's it, right? So I'll relay that to him so he can come up with examples, because he knows the problem better. He knows the solution that Sophia built better than we do at this point, and he can probably translate what he sees in their groups over to us. But that's already a fair assessment: to specify what kind of risks it would be identifying. Cool, that's it then. Okay, on to the next point.
A
Yeah, I was bringing number three up as an FYI. We talked about this; I scrolled down to go find it again, and I was re-pinging both of you on it. This is another effort out of quality, where they're asking us to pilot and test something, so we just need to respond and figure out...
A
If
we
want
to
do
this
and
what
the
impact
would
potentially
be
to
the
group
and
just
for
clarity,
the
proposal
is
that
they
would
run
end-to-end
tests
on
all
of
our
groups,
mrs,
which
would
create
pipeline
failures
if
enter
and
test
fail,
and
therefore
you
would
have
to
fix
into
in
tests
as
part
of
your
regular
work
stream.
A
To
me,
this
feels
like
more
problematic
potential
in
the
front
end.
I
know
we've
broken
into
a
test
recently
and
phil
was
like
I'm
not
even
phil
was
sort
of
like
I
don't
know
how
to
fix
that.
So
you
know
I've
got
some
concerns
there
that
like
we
could.
We
could
run
into
it,
but
I
understand
what
they're
trying
to
do.
I
just
don't
know
if
we
want
to
if
we
want
to
be
a
guinea
pig
here.
C
The
one
thing
that
I
remember
is
that
we've
had
a
situation
in
the
past
where,
by
breaking
the
qa
smoke
job,
we
we
were
able
to
merge
the
merge
request.
Then
it
caused
the
situation
where
we
couldn't
deploy
the
staging
or
it
couldn't
apply
to
an
environment
or
something.
So
this
is
very
important
because
it's
something
that
very
easily
escapes
our
site,
because
if
this
breaks
later
after
merge,
this
essentially
breaks
the
deployments
and
they
have
to
quarantine
the
tests
and
all
that
stuff.
C
So if anything, I'm probably going to ask: can we make that depend on an extra label? Not just a group label, but an opt-in, voluntary label, where the reviewer applies the label when they want to run the pipeline, and then we see from there. Because it's useful for us, but running it on every MR, on every version of the merge request being pushed, feels like overkill for now. We don't have to run it on every change; mostly just when it's ready for review.
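The opt-in label idea above could be expressed with a GitLab CI `rules:` clause. This is a hedged sketch only: the label name `pipeline:run-e2e`, the job name, and the script are all made up, not the quality team's actual configuration. `CI_MERGE_REQUEST_LABELS` is GitLab's predefined comma-separated list of the MR's labels.

```yaml
# Sketch: run the E2E suite only when a reviewer opts in via a label.
e2e-tests:
  stage: test
  script:
    - ./run-e2e-suite.sh   # placeholder for the real end-to-end entry point
  rules:
    # Only trigger on merge request pipelines that carry the opt-in label.
    - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-e2e/'
```

With a rule like this, the job is skipped on every ordinary push and only appears once the reviewer applies the label, which matches the "mostly when it's ready for review" intent.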
C
Even if you can't fix it, at least we avoided breaking something in production. So I'm gonna add my thoughts there. Thanks.
A
Yeah
just
add
your
thoughts.
There
figure
out
what
you'd
like
to
do
and
we
go
from
there.
A
The
next
one
is
like
an
fyi
questions
comments
I'm
still
trying
to
get
clarity
on
it,
but
I
think
it
is
as
best
I
can
tell
it
is
confirmed
that
back-end
is
re-entering
100
engineering
allocation,
although
no
one
has
technically
applied
matt's
suggestion
or
responded
for
what
that
means
for
our
sort
of
other
work
ongoing.
D
I think so. They did apply that suggestion this morning, so that's on there now; at least it's in the MR, but the MR hasn't been merged yet, last I checked. They have not responded to Kai's questions yet. But as far as I know, we're going to be all in on security issues.
A
And
I
think
for
andre,
the
impact
will
be
to
front-end,
which
we
talked
about.
We've
talked
about
some
of
this
week
already.
Is
that
we're
gonna
have
to
find
things
that
don't
require
a
back-end
for
planning
or
we're.
Gonna
have
to
look
at
issues
that
are
cross
section
cross
product
outside
of
our
group,
that
we
need
to
go
work
on
or
that
we
want
to
go
and
tackle
our
larger
efforts
in
that.
So
we
just
need
to
think
about.
C
So I'm finalizing the capacity for 14.5. By when do we need to finish the planning? Tomorrow, or Monday, or...?
A
Kickoff is the 18th, which I guess means planning needs to be done, like, today, but I don't think that's gonna happen. And tomorrow feels funky, and Friday's Friends and Family day.
A
So on the flip side, I don't think we'll have much for kickoff unless we are gonna go tackle the "waiting for" work. But if we are, then we sort of already know that, and that's probably the big thing we'll kick off; otherwise, everything else we can slot in and figure out. I think from a back-end perspective, it's not even clear how engineers will get assigned to issues outside of groups yet. I think Dennis is supposed to figure that out too, so...
C
I can give you this: from our perspective, and I talked to Phil and I told you this yesterday, from our side we're perfectly okay with tackling that "request for attention," or whatever we call it. Okay, what are we calling that effort again? I'm so mixed up. "Requesting attention," or "waiting for"?
C
I
think
it's
called
waiting
for
waiting
for
right.
So
the
front
end
is
comfortable
in
picking
that
up
and
even
if
he
does
have
some
back-end
involvement,
phil
stepped
up
and
he's
willing
to
take
it
on
always
looking
in
so
it
doesn't.
You
won't
need
a
capacitor
from
back
and
accept
reviewing.
C
So
in
that
sense,
I
feel
like
it's
fair
enough
to
include
that
in
the
kickoff
that
we're
gonna
start
pursuing
the
initial
steps
of
that.
What
those
initial
steps
are,
specifically,
it's
probably
gonna,
be
behind
a
feature
flag
to
be
safe
and
everything
so
yeah.
So
from
our
side,
I
feel
like
it's
safe
to
announce
that,
regardless
of
the
final
selection
of
issues,
because
the
working
group
again,
we
don't
have
a
deadline
for
the
quarter,
it's
a
until
the
end
of
quarter.