From YouTube: Quality Group Conversation (public)
A
Hello everyone, it's the top of the hour. I will take a few minutes for folks to catch up on the slides, but we can go ahead and take questions. I see that Rachel has one already; I will answer that. This effort focuses on performance testing as a first-class citizen. Upgrades are implicit: we'll be going through the upgrade flows and raising issues if we see them, but the focus here is performance, and making sure that self-managed instances are not regressing or degrading.
B
For people watching on YouTube, it's much more fun if the questions are read out loud, because they cannot view the Google Doc. Plus, it's just fun for the people asking questions to ask them themselves. The question is: what are the stages to take a reference implementation? Thanks.
A
That's just on me. We put a work-in-progress disclaimer on the reference architecture while we were working on it. The "lift embargo" is to remove the work-in-progress disclaimer on the reference architecture. From my understanding, this has been done, and it can be used to communicate towards a customer. "Lift embargo" may not be a good term for this, so we could definitely change it.
B
I didn't see any other questions, so let's take Dimitri's. I tried out Auto DevOps, and the problem is that for a lot of languages, especially the ones we don't use, it was kind of broken. I'm not sure I should call them end-to-end tests; they probably go beyond end-to-end testing. But are we working on something to make that better?
A
This falls into localization testing, and I do not think we have enough staffing to cover this, but we should definitely have this tracked and scheduled. I think it is important to cover this, but as far as capacity planning and staffing go, this hasn't been scheduled for Q3. If it is a high priority, we can definitely reevaluate. If there isn't an issue for this, I can take an action on creating it and capturing this. Yeah.
B
If there's an issue, for sure, it should be chased, with notes of the things she found. And when you say localization testing, do you mean human languages or programming languages? Human languages, so different languages, as you mentioned. I was talking about programming languages. Oh, I see: if you use Auto DevOps, it auto-detects the language you're using, and based on that it will do certain things. I see.
A
Got it now, my apologies; so, the different programming languages? Yes, this has been on my radar. I was talking about one of the issues; I think there's an issue out there that we would want some integration tests on Auto DevOps templates with different languages. It's currently part of the backlog. We can bump the priority up and groom it as part of the Configure test gaps analysis.
A
To provide more color there: we wanted to use API testing, leveraging the API tests rather than UI test automation, and have sample templates, on staging or a Kubernetes config, just run periodically with these flavors of languages and report back. There's a planning issue for that which I can link; I can sync with you on that after the call.
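The periodic template check described here could be sketched roughly as below. This is an illustrative assumption, not GitLab's actual implementation; the template list and the `run_pipeline` callable are made up for the example.

```python
# Hypothetical sketch of a periodic Auto DevOps template smoke check:
# run one pipeline per language flavor and collect pass/fail results.
# LANGUAGE_TEMPLATES and run_pipeline are illustrative assumptions.

LANGUAGE_TEMPLATES = ["ruby", "go", "python", "nodejs", "java"]

def check_template(language, run_pipeline):
    """run_pipeline builds and deploys the sample app for `language`
    and returns True on success; any exception counts as a failure."""
    try:
        return bool(run_pipeline(language))
    except Exception:
        return False

def report(run_pipeline, templates=LANGUAGE_TEMPLATES):
    """Return {language: passed} for every template flavor."""
    return {lang: check_template(lang, run_pipeline) for lang in templates}
```

A scheduled pipeline could then fail the job whenever any entry in the report is False.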
B
A
I do not have a good number on that right now; I would have to report back. But from the cost analysis of test parallelization that we've done in Q3, for QA alone I don't think it is that much. I think most of the cost is actually the CI pipelines, so everything in the merge request pipelines. I don't think the QA integration tests at the end-to-end level are costing that much; I think it was five thousand, like not more than five thousand per month.
A
And yes, they are really the same thing, but for end-to-end we could also be using APIs as well, and that is also in line with setting up multiple instances of GitLab; we're testing Geo, for example. People tend to use the term "integration". We should probably just use one single term to describe this, as end-to-end tests.
A
Because when you make an end-to-end workflow, you can do it via a UI action, or use an API to skip some of the workflows but still cover most of the end-to-end user scenarios. So in the industry the terms are used interchangeably, but at GitLab I think we should formalize on one term, and I would suggest using "end-to-end tests" for it.
A
The quality engineering manager title is becoming more of a focus for me, because we do want to do quality engineering the modern way, and there are folks that come in as QA managers who manage a team of manual testers. But we don't want that. We want people that have done test infrastructure optimized for productivity, and that has been more of a challenge to source and get. We're currently working with Rupert and Brittany on how we can improve this. I also see a trend of this title.
A
I'm sure it's in our list of backlogs. We were planning to use review apps with the combined reference with master: deploy your change combined with master and test against that. I think that will help solve this problem. I'm not sure if that's what you meant by merge trains, yeah.
B
So merge trains, that is combining it with master; I think it's separate from the review app, because when a merge train kind of fails, because of one of the wagons on the merge train, you start retesting everything. That would mean redeploying the review app all the time, which I don't think is ideal. So I do think the review app should be based on the feature branch. But I think, before we can start: if you do merge trains, you're going to have a lot more testing, because sometimes you have to retest.
B
So that's why I was inquiring about the cost you have in there. So the end-to-end tests don't have a lot of cost; do you have a handle on how much the non-end-to-end tests cost? The unit tests, which I think in Rails nomenclature includes integration tests: how much does that cost?
B
I'll talk a bit about why I think it's important. We now have over a hundred thousand tests that are run on every commit. I think at some point we're going to start seeing: hey, we can't run every test on every commit, and we need machine learning to kind of figure out which tests break when you change something.
A
Got it, thank you. I think this falls in line with marrying machine learning and change detection, and having a list of data that we can reference and make decisions on, having the computer do that. We'll make sure we have an issue to capture that. We have some basic change detection improvements, but I think this is in addition: going to machine learning, building it directly into GitLab, and dogfooding it ourselves. Yeah.
A
We're excited about closing all enterprise end-to-end test gaps, and that's where we put everybody as a first priority. But as far as test automation goes, the second one is: improve our CI builds and pipelines. I would love to see folks here be happy at their jobs, and for that we need to make sure that they get fast feedback. We have an ambitious issue of going to ten minutes. I can't promise that for Q3, but we will do our best to make improvements here on lowering the time.
A
Specifically, we want to run this on all the versions and catch up to the latest, which is 12.3 or 12.4. We've done this for 11.9 and 11.8. As far as the automated test process goes, this is a CI configuration in the performance project, and the pipeline is using GitLab CI to run. I am not sure how we could make this available for customers.
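The pass/fail side of such a load-testing pipeline might boil down to a baseline comparison like the sketch below. The endpoint names, numbers, and tolerance are assumptions for illustration, not the performance project's actual logic.

```python
# Hypothetical regression check for a load-testing pipeline: compare
# per-endpoint mean response times against a stored baseline and flag
# anything that degraded beyond a tolerance. Numbers are illustrative.

def find_regressions(baseline_ms, current_ms, tolerance=0.10):
    """Return {endpoint: (baseline, current)} for endpoints whose mean
    response time grew more than `tolerance` (fractional) over baseline."""
    regressions = {}
    for endpoint, base in baseline_ms.items():
        cur = current_ms.get(endpoint)
        if cur is not None and cur > base * (1 + tolerance):
            regressions[endpoint] = (base, cur)
    return regressions
```

A CI job could run this after each load test and fail when the returned dict is non-empty, which is one way to catch the "regressing or degrading" case mentioned at the start of the call.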
B
Consider it. So it should work in the same way that Sitespeed currently works? When we do this: Sitespeed already tells you, kind of, the performance of the front-end, and this will be stressing the performance of the backend. Okay, then it's useful to see, kind of, is it getting worse over time.
D
Yeah, we should add an issue explicitly for this. I know it's something that we've been discussing also with Drew, as far as how we make this extensible. I think that the path forward, something similar to review apps but specifically for performance testing, would be fantastic. I can work on making an issue for that.
B
That'd be great. And maybe what we now call Sitespeed, we call that "performance testing"; I think, to disambiguate, we should call that "web performance", and this one we can call "load performance" or something like that. I'm not sure about the second name, but we'd kind of have them as separate features.
A
Okay, thank you, Sid. Moving on to point 9. Olivier, thanks for the great work on label migrations and automations for issues and merge requests; what about epics? No, we don't have epic labeling on our radar yet. If you can, please create an issue in our issue tracker with the trade-offs, and then we could put that in our backlog.
B
You mentioned a few times changing priorities; I don't want you to change any priorities based on this call. I think the presentation is very impressive. It gives a great overview, and I'm particularly impressed that you added the backlog of stuff you would want to do but aren't doing. It gives me an easy opportunity to prioritize something, but it also shows what you will or will not do.