From YouTube: Quality Group Conversation (Public Livestream)
A
Some of these are an iteration from the boards and the board records meeting, and these are essentially the highest-priority items. There are more in the backlog that I didn't capture in these GC slides; different teams are tackling them.
If you look at it: the dev side of things in the Quality department is actively tackling test readability and test efficiency, the metrics are being tackled by the Engineering Productivity team, and test reports and making tests easy to debug are tackled by the Ops and CI/CD engineers in that sub-team.
A
Yes, so we do. We have actually lost some candidates because of the comp mismatch. We do take it into account and document it, but sadly we say goodbye to them at that point because we can't pay at that range. These are people that have moved from San Francisco; they have that title, but they have now moved to the East Coast or other areas of the United States. So the next iteration would maybe be a revision of the benchmark.
A
I'm engaging Brittany and Erica Chan on that, and for the next iteration I'm looking to see the title change and iterate on that, without jumping too far from the numbers.
C
A
Thank you for that. We just talked about this, Remy and I, when we met this morning. We're looking to enable Insights for every project under the gitlab-org group and get feedback, so the teams can interact with it. We're looking to lift the feature flags sometime in 12.0, and even accompany it with a blog post at Contribute.
A
Code churn is one of them, but we're tackling it from a GitLab use-case perspective, like throughput and such. So once we have lifted the flag, I think we're going to start creating dashboards in native Insights itself, in the drop-down at the group level: org, Plan, Create, Manage, and maybe even at a sub-department level. But that depends on an issue that will allow us to narrow down the numbers, so you can create a chart limited to just the projects that you're looking for right now.
A
Also, I added a few links on slide 14. This is fresh out of the oven from Contribute. We had a round table on how to improve community contribution experiences, and this is a working doc. I'll be creating an epic and also filing issues for us to make improvements here.
But the high-level summary is that we do want to have some kind of a coaching mechanism. If there are a lot of first-time contributors that just go away, they might need help. So, instead of just letting them struggle, we give them the option: hey,
A
do you want us to take it over? Then we use the MR assignee as the reviewer. Because first-time contributors are Reporters, they don't have the permissions to be assigned to any MR by default, so we can reuse the MR assignee as the reviewer, who is responsible for reviewing the MR and shepherding it across. There's a bunch of discussion around that as well, I mean in the doc, so apologies, I'm listing less here. And we want to open the MR coach channel and leverage Gitter.
D
A
So the model for CI is: a flaky test is worse than no test. It's a lot of work on us right now, but every flaky test we move into quarantine is still being run in a separate pipeline without creating noise in the status of the MR, so that helped a lot. There's still a lot of work on de-quarantining them, and that's the model that we're going with; that's industry standard for CI/CD.
A
We do want to automate some of this in the next iterations, where, if a test starts to fail, a bot will automatically quarantine it and create an issue for us, instead of us going in and quarantining it manually. Those are the building blocks to get us to a clean CI result. I can talk about a few more iterations here; it's not in the green yet. We still saw some failures while we were contributing as well, so there's a lot of manual hand-holding on our side.
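The automation described here, quarantining a repeatedly failing test and opening a tracking issue, could look something like this sketch; the threshold, `FlakySpec`, and the issue list are assumptions for illustration, not the actual bot:

```ruby
# Hedged sketch: once a spec crosses a failure threshold, quarantine it
# and record a tracking issue so a human can triage and de-quarantine.
FAILURE_THRESHOLD = 3

FlakySpec = Struct.new(:name, :recent_failures, :quarantined)

def auto_quarantine(specs, issues)
  specs.each do |spec|
    # Skip specs already quarantined or still under the threshold.
    next if spec.quarantined || spec.recent_failures < FAILURE_THRESHOLD

    spec.quarantined = true
    issues << "Quarantined flaky spec: #{spec.name}"
  end
end

issues = []
specs = [
  FlakySpec.new('stable spec', 0, false),
  FlakySpec.new('flaky search spec', 4, false)
]
auto_quarantine(specs, issues)
```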
B
Mek, I should know the answer to this and I don't, so that's not a great way to start the conversation. But on slide five you're talking about meeting bug SLAs; have we actually defined what the SLA is for bugs? I was curious what they are.
A
What
they
were
they
are
in
the
they're
in
the
documentation.
It's
it's
hard
to
find
right
now,
because
the
documentation
is
it's
working
on
making
it
better
but
they're
in
the
definition
for
our
workflow
labels
and
it
spells
out
what
is
the
SLA
for
P
1
to
P
4,
and
we
share
the
same
as
the
life
or
for
security
spell
out
there.
A
We do want closer collaboration with the Support department. Because this one is public, I don't want to dive into explicit numbers here, but when somebody posts a Zendesk link to an issue, I originally thought that we need to escalate every time. That's not the case, because the Zendesk issues also include non-paying customers.
A
That is one, the GDK. I wish we had more time to work on the GDK. I think you heard the next thing Remy is working on; I think the success rate is now in the 90s, ninety percent, so we're going to make that a soft gate. But the GDK is probably more neglected, unfortunately, in this past quarter, I would say. So I wish we had time to work on improving the GDK and making it more stable, and the UX team is also using the GDK to preview their changes.
C
If I may add, that's a really good conversation and we've had similar ones on the dev side. It's going to be really important, because this is not just a Quality team initiative, nor should it be; the dev teams are also invested in helping with this. We need to have the quality gates up and running consistently, so that when we go and turn on CD, we have confidence in every MR making it to production.
B
I feel like I'm asking all the questions today, Mek, and I also want to say nice work. So we talk on slide nine about not requiring test plan issues by default, so we're changing kind of our process around that. What I'm curious about is: are we doing anything in anticipation that that may affect our quality, or how are we keeping an eye on that? By the way, I support that initiative; I just want to make sure that by removing something we're not affecting quality too much.
A
Sure. However, we do want to make sure that there's some test plan coverage for big changes, for example the Rugged patches or the performance patches, and if we are upgrading Rails or upgrading Ruby, there has to be a test plan for it. There's a WIP MR I'm working on right now for that, so yeah, that explains the reason.
F
A
So, security: I am leaning on Kathy for this effort, but if we need Quality to be involved, please let us know. I'm not looking at it specifically; if there's anything that I should be looking at, please let me know. Are the bugs security-related, or are they functional bugs?
F
I would guess that we would file them as security-related. So the security dashboard is looking at all of the projects under gitlab-org; granted, that's not just EE and CE, there's a lot under there, but it flags when it finds what it believes is a secret, and those might be false positives, I don't know. It's just another test that we run, or that it's checking when later ones are due.
A
If you have a link for it, I'm happy to jump in as well. I think security and quality are highly intertwined, so anything we can do to help ensure that no secrets are being leaked, and asking our counterpart test automation engineers to highlight it, we're happy to do as well. If there's an issue or link, please share it; I'm happy to look into it. Yeah, thank you.
D
A question for you: we just got back from GitLab Contribute, and I am curious, in terms of your team and the interaction that you experienced at Contribute, what are some things that maybe you experienced as a team that were unique to being together face to face, co-located, that we don't get to do remote? Maybe the summary of the question is: what was the value that you feel you got out of Contribute for your team?
A
Unfortunately, not all of them could make it; there was the visa situation for one of our members from Pakistan, and the visa got denied, and Remy wasn't there, and a few members couldn't join as well. But the interaction was great. I think the level of energy and enthusiasm in person is different than on a team call.
A
We did use the team dinners to bond better with our counterpart teams. We didn't have a specific department dinner per se, but each counterpart member went out with their respective stage group, so building those relationships, that is important. I think that's something you don't get to do every day, and yeah, next time I'm hoping that more of us can be there to interact in person. But a funny comment from one of my team members: I think the level of productiveness doesn't change.
A
I think because everybody was at Contribute, fewer people were responding to their issues, so if you were working, you actually were less productive, because nobody was responding to your issue. So I would say that async and remote is winning here, because we get more stuff done at home, per se, in terms of reaction time. So that was an interesting remark from one of my team members, yeah.
D
A
So we're at the 21-minute mark, so we'll open it up for a few more questions, and we'll probably end at 26 or 27, so we're in good time for the company call.