From YouTube: Package Group Weekly Sync 20200302
A
Welcome, everybody, to the package group weekly sync for the 2nd of March 2020. I'm pretty confident I have the date right this time; I triple-checked yesterday that it's correct. And I guess I'll kick off with an item here: a CI/CD strategy conversation. I linked to this document, and it's pretty much worth reading through; this will make an appearance in our handbook pages for strategy, I believe. Tim, do you want to let people who are not in GitLab, who may be watching this, know where they could find this at some point?
B
Right, yeah, exactly. So basically, Jason reviewed the CI/CD strategy and direction pages with the key group last week and got some feedback. They took some notes, and there were some action items, mostly for Verify and Release to take, but some good feedback about our stage as well. So I just wanted to post it here.
A
Cool, yeah, and I definitely thought it was worth reading through, slash watching the video if you're interested. But again, this is one of those internal conversations that will appear in public, for people who are following along. So, cool, moving on from that, if no one has any comments or questions. Cool, Jerome.
A
Germán updated the definition of done with a couple of MRs: one around secure coding guidelines, and the other around storage considerations for performance. This is something we should all take a look at; the definition of done is something we need to be paying attention to. And for people at home, the definition of done is in our handbook, so it's something you can have a look at, and it's applicable to community contributors, so worth taking a look at. Thoughts, questions? Oop, and over to Nick.
C
Yep. So I noticed last week that the code review survey insights were posted, and the big headline, at least for Package, is that it seems we have a majority negative sentiment towards the code review process. So I just thought perhaps we should talk about that and figure out why it's negative, and what action, if any, we should take as a team to address that negativity and move forward. That's why I originally added this to the agenda.
C
It's not black and white, and so I was fairly opinionated as to what's been blocking. Now, in my head I'm a pretty lenient guy, in the sense that if the code is readable and isn't a mess, then I don't feel that I should be getting involved and making changes, especially enforcing my opinion, and especially outside of Package. So if I was to review somebody else's code and they had a different style to me, then I wouldn't necessarily comment on that, or block a review on that.
C
Part of me wonders, when I think about this in the broader sense, and especially about my own experiences as well. So I'm training to become a maintainer, and a lot of the MRs I'm getting are pretty simple, in the sense that there isn't much to suggest. But I'm never going to become a maintainer unless I'm providing feedback and can demonstrate that my knowledge is good enough to be worthy of maintainership.
C
So I feel that, in certain respects, our review process really does encourage this kind of opinion-based nitpicking of code, because if you're looking at reviews and there's not really much you can say, you're never going to become a maintainer. You end up using your interest as a trainee maintainer to find something wrong in a review, just to demonstrate your knowledge.
C
If there isn't anything obvious, then it can be quite hard to prove this. And, at least in my experience, I'm not getting a lot of reviews per week, perhaps two or three, and this was before me and Nikko were handing reviews of MRs back and forth, at which point we did sort of accelerate how many reviews we were doing in a week. So a lot of my feedback was based on that.
D
And actually, kind of echoing that a little bit: I would tend to agree. I have experienced a lot of code opinions that aren't really documented in the developer docs, like "oh, we should format a test in a certain way", little cosmetic sorts of things. And I would agree that if we are going to have some sort of "everyone is going to follow these rules"...
D
They should be enforced, or written down in the docs somewhere, because that would be a more reasonable way to say "this is how we do things". So that might be something worth looking into. On my side, I was thinking more, not necessarily about the nitpicking, but about whether or not we should be using roulette. I have very mixed opinions; I've discussed them with a few of you. What I wrote here really comes down to this:
D
Doing reviews within our team helps spread knowledge within our team, but it does take away from the rest of GitLab. So it's nice that we share our reviews with each other. But, like you said, you don't get many reviews as a trainee maintainer, and some of that might be because you come up as the reviewer, but then someone chooses their own teammate instead. So it's kind of this double-edged sword of, like, are you being...?
D
I was thinking that, if an MR is big or complex enough to warrant that sort of domain expertise, or the need to share some knowledge, perhaps instead of just having the first review be by one of our own group members, we would have a separate review (dare I say third review, extra review) where the package member isn't reviewing the quality of the code or anything like that, but purely just reviewing the approach and verifying that the solution makes sense.
D
So, really, just a domain review. I think that would still benefit our team in spreading knowledge, but it shouldn't really take much time or block the MR, especially since that's the kind of thing that can happen while the MR is still being worked on. And, in doing that additional sort of review, it still allows us to use the roulette, benefiting all the rest of the people that show up in our trainee rotation.
F
Yeah, I'll go ahead. Isn't this something that could probably be done with, like, a CODEOWNERS file and approval? Just so that somebody's identified as a domain expert there, and you have to get their approval on the MR, and that could be, like, for the overall direction of the MR, rather than them actually doing a review and looking at it in fine detail.
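A minimal sketch of what that suggestion might look like using GitLab's CODEOWNERS file with required code owner approval (the section syntax shown is from later GitLab versions, and the section name, paths, and username are illustrative, not anything agreed in the meeting):

```
# .gitlab/CODEOWNERS
# Hypothetical section: MRs touching these paths would need one approval
# from the listed domain expert when "require code owner approval" is
# enabled on the protected branch.
[Package Registry]
/app/models/packages/       @package-domain-expert
/lib/api/maven_packages.rb  @package-domain-expert
```

The expert's approval would then gate the MR on direction, without them having to do a line-by-line review.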
D
E
Okay, so I was looking at the full survey, not just the part regarding Package, and for note-taking I will link this later in the document as well. I think we need to take two things into consideration. First, the survey was not done by the full company, and definitely not by all the individual contributors. And second, there is another piece of data that I think is really relevant.
E
It's the fact that most of the people that have a problem with the review process are people that have been at the company between three months and one year. First of all, this aligns with the fact that our stage has a bad opinion of it, because that's basically the whole of our team, right? If not all of it, it's close. And the second thing: I think this suggests that people are getting used to this kind of review process, and getting used to it can be positive or negative.
F
I was just thinking that the pattern, where people who have been at the company longer are not rating the review process as low, perhaps reflects survivorship bias. If these people really had a problem with the review system, they might have moved on; they might have been generally unhappy with the company and moved on, unlike somebody who's been here for longer.
A
Yeah, okay. So I think it's probably worthwhile having more of an in-depth discussion about this, and I'll create an issue in the team page, so we can sit down and try to do an RCA on this and on what the causes are. I feel like this is a really worthwhile discussion, and I want to have Gigi's input on this as well.
B
G
A
I think this is totally subjective in nature. Just to be really clear about one thing: I was actively encouraging people to hand reviews to each other in order to sort of triage the situation, because people's reviews were taking weeks, which was just unacceptable. I think, at the point we all sort of start getting a handle on what we're trying to do, then there are those sorts of concerns that Nick originally raised.
A
What feels like really nitpicky or opinionated approaches that need to be adhered to before you get something merged: what I hear there is an example of something where it's like, "no, this is not a problem with the code; where is this in the handbook?" That's how I'd respond, and if it's not there, I'm going to submit it anyway.
A
You can create an issue to correct it, or something like that. I don't want people to be gated on that unless it's actually a significant problem. But these are the sorts of things that I think should be rolled up into the handbook, to sort of say that in an opinion-based scenario they shouldn't be a gatekeeper. Although that is the intention of a maintainer, right? They are a gatekeeper; that's the kind of role it is.
A
I think we had been talking about the idea of documenting what the expectations are, what could be considered non-subjective or non-opinionated reasons to hold up a code review as a reviewer or maintainer, to be clear on what those things are. But I think there are a lot of different variables, so I'll go and create an issue so that we can all talk about it and properly pick it apart a little bit, and, if necessary, we can have a separate conversation about this.
A
That is, if we think it's going to be helpful and people feel like that's OK. Actually, I don't know; I don't want to seem like I'm dismissing anyone's concerns at all, because I'm certainly not. I think this is one of the things I've been focused on solving as well. Is that okay with everybody? Thumbs up, thumbs down? Cool, thank you. And I just realized we're coming up on time, and I wanted to make sure we were taking something away from this.
A
Does anyone else on the call want to say anything about the review process, or feel like something should be added here, since this might be the only chance? Some people might happen upon this conversation in the package group meeting on YouTube, or might actually be reading through the notes or something. Yeah.
F
So, one of the things that I remember reading in the handbook about this kind of thing, where it's like, "I have a problem, it's kind of stylistic, and it's not necessarily the GitLab style": the practice is to prefix your comment with, like, "nitpick" or "non-blocking" (I think it's specifically "non-blocking"), just to say, "this is my preference; I prefer the braces on the next line, but we don't have a rule."
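As a sketch of that convention, a prefixed review comment might read something like this (the wording is illustrative, not quoted from the handbook):

```
non-blocking: I'd personally put the opening brace on the next line here,
but since we don't have a documented rule for this, feel free to keep it as-is.
```

The prefix tells the author up front that the comment is a preference, not something that will hold up the merge.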
A
So it might be nice if we could build those habits: to have people front-load the comment with "optional" or "subjective" or "opinion" or whatever it is they front-load it with. Then you can start that as a sort of process for people to follow, and it will become a habit at some point. But you still potentially have that scenario where people are like, "no, no, this is legit," and you're like, "but no, it's not, right? I need..."
A
So then, how do you work that conflict down? And that sounds more like what Nick was communicating at the start, right? Like someone going, "no, no, you have to do this," and it's like, "really?" "Yes, my opinion is important." And I'm not talking about anyone in particular; obviously, that was just how I heard it characterized. I shouldn't be making jokes; I apologize. Cool.
E
I have a question: is there a way to extract data around this, instead of conducting a survey? I mean, everything is in our GitLab.com database, right? We already have data about things like mean time to merge, but maybe we could extract who the reviewer of every MR was, whether they're in the same department or same stage, or their years of experience, and look at the data instead of feelings as well.
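A minimal sketch of the kind of analysis being suggested, assuming MR records with reviewer and timing fields had already been extracted (the record structure and field names here are made up for illustration; they are not the real GitLab.com schema):

```python
from datetime import datetime
from statistics import mean

# Hypothetical MR records; in practice these would come out of the
# GitLab.com database rather than being hard-coded.
merge_requests = [
    {"author_team": "package", "reviewer_team": "package",
     "opened": datetime(2020, 3, 1), "merged": datetime(2020, 3, 2)},
    {"author_team": "package", "reviewer_team": "verify",
     "opened": datetime(2020, 3, 1), "merged": datetime(2020, 3, 5)},
    {"author_team": "package", "reviewer_team": "package",
     "opened": datetime(2020, 2, 20), "merged": datetime(2020, 2, 22)},
]

def mean_days_to_merge(records, same_team):
    """Mean time-to-merge (in days) for MRs reviewed inside
    vs. outside the author's own team."""
    durations = [
        (mr["merged"] - mr["opened"]).days
        for mr in records
        if (mr["reviewer_team"] == mr["author_team"]) == same_team
    ]
    return mean(durations) if durations else None

print(mean_days_to_merge(merge_requests, same_team=True))   # in-team reviews
print(mean_days_to_merge(merge_requests, same_team=False))  # cross-team reviews
```

Slicing the same durations by department, stage, or tenure would give the comparisons mentioned above, grounded in data rather than survey sentiment.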
A
If I'm honest, I didn't even know we were getting this, and this is the first time I'm seeing it; I was like, "oh, really?" I probably just missed it because I was distracted on Friday, or whenever we were doing it, but that's on me, so I apologize that I'm not as read up on this as maybe I could be. So I'll follow up on the idea of getting more data. I think, in this context, I will have action items coming out of this.
A
This will certainly be one of the action items from this whole survey anyway, or it's a bit of an outlier there, as everyone rightly identified. So, cool, all right, I'm going to go create the issue. I'll also add an item to the staff meeting so that we can talk about getting more data, and I'll mention it to Darby as well. And just as a reminder: everyone can host that staff meeting. It's not... well, the videos are uploaded anyway, but they go into the Google Drive as normal.
G
A
Cool, thanks for sharing that, David. That's awesome; appreciated. All right, I think that's all we have on the agenda right now. So, if everyone can hang around, we do have a non-recorded item that we need to discuss. Apologies to the people trying to follow along at home, but we do have some things that are not public, and that's explained in the handbook. So please hang around after this; I'm going to stop the recording. But thank you very much, everyone; I hope everyone enjoys the rest of the day.