From YouTube: Development Group Conversation (Public Stream)
A: When I started, I didn't know my way around the metrics, but I've learned a little bit more now and am able to dig into these things. I just asked for some more help on exactly what we're trying to fix, because I like to do that before any performance improvements.
C: The wonderful world of permissions. I will simply take the recording afterwards and put it on GitLab Unfiltered, as this doesn't seem to work right away; I'm not able to log in with the GitLab Unfiltered account. Good. Hello, everybody, and good morning, good afternoon, good evening, or, as someone already mentioned for inclusiveness, a good day. I'm very happy to welcome everybody to today's Development Group Conversation, which I'm hosting as Christopher is currently out. I hope you already had some time to take a look at the slides and/or the pre-video, and I even see that we already have a couple of questions in the agenda. So we will get started on those, then simply take it from there, and hopefully you'll add your other questions in the meantime.
C: Let me see if Victor is on the call; if not, then I'll go ahead and verbalize it. The question was: do we have any data about how engineers feel with all the engineering allocations and infra/dev issues going on? I see that the current retention numbers are beautiful and would like to make sure that they stay that way.
C: Yes, we would also love to have the numbers stay at that level. They are really good at the moment, at a very high level retention-wise. And what do I think about how the teams react to it?
C: I think it's a really important topic that everyone involved takes ownership of their area. This starts with everyone who is involved in what I would call the core process of developing something, and it really means that you take ownership of your area and of the worst things that can happen to your area or to your group.
C
So
they
want
to
have
that
stuff
perform
reliable
and
fast,
and
I
believe
that
seeing
how,
as
soon
as
people
are
assigned
to
a
topic,
this
is
also
very
can
be
very
interesting
and
sometimes
very
challenging
topics
which
makes
engineers
also
interested
even
worse
to
some
extent.
So
this
means
that
engineering
allocation
seeing
what
happened
in
an
incident
as
it's
not
that
easy
that
you
can
relate
a
one-to-one
thing
between
something
that
happened
and
what
the
cause
of
it
was.
C: I would say those are exactly the topics that we are also trying to prepare engineers for and to motivate them around: everything that we will invest into here is a huge improvement for gitlab.com, for our users, and for the users on self-managed, because every performance or reliability improvement that is right now rather targeted at .com is, of course, also adding huge benefits to self-managed, since we are making something faster while making it more reliable. And as we are seeing so much growth, especially over the last months and years, this simply means that if we are implementing a feature, if we are deploying something to production, it is like landing in a huge river which flows faster in some areas and a little bit slower in others. This really means that we need to think, especially for the future.
C: What can we do better? Because features are used immediately, within seconds or minutes, by thousands and sometimes even millions of users within a couple of hours. So anything that you drop there which is not performant, which is not reliable, will cause problems for the users. And that brings me back to being a little bit selfish.
C: What are the things that we can do better in the future so that we are not landing there again? This can range from reducing technical debt, which is a super wide term but something we are looking into as well, over alerting and monitoring, over to knowledge: another course, a Postgres course, was just announced, all the way to really learning how to work in such a scalability-sensitive environment as we have with .com. And that is, I believe, where we are trying to have everyone focused on improving the situation. Seeing how our teams and our engineers handle such incidents in such a great way shows it is something that they really like to work on and make better. There is one addition from John: looking at this forum from the security side, I really welcome the focus on the security backlog.
C: Yes, and we want to burn this backlog down. We are increasing our focus, and also the number of people that we have on this topic, to really burn down security topics, especially in Manage, as that is where the most lands, because those teams are responsible for authentication and authorization. So a lot of stuff lands there, but this is definitely something we are currently investing more in: to bring the time to get to zero on that down from some point further out to rather within the next three months. That is something we are also investing in. I get that fixing security issues is a very specific topic, but I think there are also some really interesting technical topics, some long-term strategic goals that we can set, to not get into some of those situations in the first place and basically make our software more bulletproof, harden some ends, and improve some processes.
C: So I think it's also not just about fixing that one specific issue, but about taking ownership: maybe a team is not really responsible for it, but teams come together and pull together to make the whole better from a global perspective. I think that is a great target, and seeing how everyone is invested in it is something really nice to see.
B: Thanks. About slide seven, about the LCP KPI: we're mentioning lowering the target to between 1.5 and one second in the near future. I'm curious whether this is a general statement for all the pages, because the graph that we display there is just about the project page performance. But if we look at the leaderboard in Grafana, we can see that, I'd say more than 50 percent, it's more like two-thirds of the pages that are above that target.
B: So I think we should consider maybe an adjustable target depending on the page, because some pages definitely might need more time, and it might be more acceptable for them to take longer to display the information, like loading a large blob, for instance. I'm also a bit worried about the fact that we keep adding more metrics and are trying to make them green, so if we just drop the bar on that target, this will add another red metric for several groups.
B
And
I
wonder
if
you
go
that
route,
if
we
should
highlight
the
priority
of
this
kind
of
performance,
improvement
versus
all
the
other,
like
your
budget,
for
instance,.
C: We had a very clear focus in Q2 to reduce the overall number of routes that don't have a good LCP. Just as a short explanation: LCP means Largest Contentful Paint. It measures when the biggest element of the page is painted, and it's part of what is currently called Web Vitals, which is becoming the industry norm for loading performance at the moment. So it captures when the page is ready, and feels ready, so that you can actually interact with it.
C: Google is basically suggesting a target of 2.5 seconds, which is also the target that we had, so we are using those industry-standard metrics. They even started to take this into consideration for their search algorithm, so it's even playing some part in SEO, but the main topic is really the user's perception of performance. They did a lot of research on what is perceived as fast and what is not, and the guidance is: everything below 2.5 seconds is good. Now comes the question: should we rather focus on that part?
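The thresholds discussed here can be checked mechanically against the measured routes. A minimal sketch, where the helper `lcp_pass_rate`, the route names, and all timings are illustrative and not GitLab's real data or tooling:

```python
# Fraction of measured routes meeting an LCP target; illustrative data only.

def lcp_pass_rate(lcp_by_route, target_seconds=2.5):
    """Return the share of routes whose LCP is at or below the target."""
    if not lcp_by_route:
        raise ValueError("no routes measured")
    passing = sum(1 for lcp in lcp_by_route.values() if lcp <= target_seconds)
    return passing / len(lcp_by_route)

samples = {
    "/explore": 1.2,        # raw baseline page
    "/group/project": 2.1,  # typical project page
    "/-/ide": 3.4,          # complex page such as the Web IDE
}

print(lcp_pass_rate(samples))                      # against the 2.5 s guidance
print(lcp_pass_rate(samples, target_seconds=1.5))  # against a tightened target
```

Lowering `target_seconds` from 2.5 to 1.5 immediately flips routes from green to red, which is exactly the earlier concern about adding another red metric for several groups.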
C: The actual KPI that we currently have on LCP is around the explore page, which is one of the most standard pages. It's one of the rawest pages, so there's not too much going on that loads additionally, which gives us a really nice baseline. It dropped by a lot. But, as you mentioned, we have some more complex pages; just talking about the Web IDE, we are loading a lot of things.
C: We are doing a lot of things there, so that one is way more complex. One thing that I suggested in the last key review meeting is to rather look at it differently. What we did in the last quarter is take a look at all the routes that we are measuring, I think it's 115 or so at the moment, which we measure every four hours on gitlab.com.
C: There are also a couple of other topics, like hot cache, which is something we might rather measure against, because right now we are measuring everything against a cold cache. That means we are simulating a user who is coming to one of those pages with their browser for the first time ever; it's not a consecutive visit, so it is rather a classic website-loading metric. This might also be something that we switch: Quality is already testing without cache, and we have our sitespeed runner which is doing hot cache, etc.
C: There is also one more thing that we are looking at, and we hope to get it in the next quarter through the working group around frontend observability. We are trying to integrate Sentry, and that would give us performance metrics automatically, out of the box, as real-user metrics, highlighting if they go bad or worse, and with full stack traces from that user. So we can really go back in time and say: okay, this user had a very bad experience on this page.
C: What did they call? What was happening on that page? We would basically see it the way we see the error reports in Sentry; we would also see those performance reports, which would be a really great feature to have: not just synthetic measuring, but also real-world measuring. And one last thing, which is also something we're looking into in detail: right now everything is about loading metrics. Loading is nice.
C: Loading is super important, because if it doesn't get to your screen you can't interact with it, and it's always the first thing you start with. But what users perceive as fast or slow is, on the one hand, the loading; the next thing is reliability, whether that thing always loads or is sometimes slow, etc.; and the third part is the interaction with it. If I'm clicking on a New Issue button, how long does it take? If I'm going to a project and click around?
C: We currently have one workflow where we tested this out: you land on the project homepage, you go two directories deep, click on a file, go back, and click on another file. How long does the whole thing take, and if we are making improvements, how much will we be able to improve across this whole process? Because this is much more like a simulated customer interaction rather than a load. Thank you.
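That click-through can be expressed as a timed sequence of steps. A minimal sketch, with made-up step names and durations rather than real measurements:

```python
# End-to-end timing of a scripted workflow instead of a single page load.
# Step names and durations are hypothetical placeholders.

def workflow_duration(steps):
    """Total duration in seconds of a list of (step name, seconds) pairs."""
    return sum(seconds for _, seconds in steps)

def relative_improvement(before, after):
    """Fraction of the baseline workflow time that an optimization saves."""
    baseline = workflow_duration(before)
    return (baseline - workflow_duration(after)) / baseline

baseline = [
    ("land on project homepage", 2.4),
    ("navigate two directories deep", 1.1),
    ("open first file", 1.8),
    ("go back", 0.6),
    ("open second file", 1.7),
]
# Assume an optimization that makes every step 20 percent faster.
optimized = [(name, seconds * 0.8) for name, seconds in baseline]

print(workflow_duration(baseline))
print(relative_improvement(baseline, optimized))
```

Reporting one number for the whole journey keeps the focus on what the user experiences across the interaction, not on any single load.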
B: Thanks, I really like that latest proposal. Do you expect that to be a per-group approach, like asking every group to come up with a workflow that is related to their feature? Or are we looking more at a company-wide initiative that will come down to each group if one of their product areas ends up in one of the workflows that have been defined there?
C: Yes, I see this currently from two directions. One thing that we are working on in the frontend observability working group right now, together with operations and engineering, for example, is really to get SLAs not only for the backend but also for the frontend, and to really have measurement. That would be something we can provide from one point to everyone, and we could simply backfill it into error budgets: have clear error budgets, not just backend or frontend lumped together for one group, but rather have backend and frontend separated, and basically provide direct feedback there, not just on errors but also on Apdex scores and things like that. The other thing is really providing some sort of tooling, some sort of infrastructure, like we have with the sitespeed setup, to basically extend it and say: okay, look, this is how you can do it and measure such a workflow, and now you only need to go in; the first step was made easy.
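Splitting the scores per layer could look like the following sketch, which applies the standard Apdex formula (satisfied plus half of tolerating, divided by all samples) separately to backend and frontend timings; the thresholds and sample values are invented for illustration:

```python
# Apdex computed per layer, so backend and frontend each get their own score.
# Thresholds and sample timings are illustrative, not production values.

def apdex(durations, threshold):
    """Standard Apdex: satisfied if <= T, tolerating if <= 4T, else frustrated."""
    if not durations:
        raise ValueError("no samples")
    satisfied = sum(1 for d in durations if d <= threshold)
    tolerating = sum(1 for d in durations if threshold < d <= 4 * threshold)
    return (satisfied + tolerating / 2) / len(durations)

backend_request_times = [0.2, 0.4, 0.9, 2.5]  # server response times, seconds
frontend_lcp_times = [1.1, 2.0, 3.0, 11.0]    # rendering times, seconds

print(apdex(backend_request_times, threshold=0.5))
print(apdex(frontend_lcp_times, threshold=2.5))
```

Keeping the two scores separate means a frontend regression cannot hide behind a healthy backend score, and vice versa.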
A: Sure, thanks. I was encouraged to see the focus on tech debt, and even possibly taking large chunks of time for tech debt. But on large, mature, complex products like GitLab, which is what I've worked on most of my career, there's no end of tech debt. It never ends; it's something you live with, and the idea is to prioritize it: you know, is this something that I want to take on or not? And the danger is often that, in the absence of anything specific or intentional, what people take on is whatever they're familiar with, or whatever is the thing bothering them the most, which is not necessarily the highest-priority tech debt. So I have this algorithm that goes through my head every time, like: is this a hill I want to die on? Is this security? Does it leak customer data? Could it get us on the front page of Hacker News? How annoying is it, even if it only wastes five minutes?
A
Does
it
make
me
hate?
My
life
and
all
of
these
other
things
of
whether
this
is
important
and
whether
I
even
mention
it
or
not,
and
so
this
is
a
good
start,
but
there's
other
things
also
subtly
that
I
think
people
miss.
For
example,
you
know
even
if
it's
a
little
thing
and
it
isn't
necessarily
in
supportive
future
if
it's
just
really
annoying
and
you
have
to
deal
with
it
every
single
day,
even
if
it
only
takes
two
minutes.
A: That impacts other extremely important metrics: retention, employee morale. These are all, you know, lagging symptoms of tech debt and of how we address it and how well we prioritize it. We're very metrics-driven about everything, so I would love to see us inject more of that around tech debt in GitLab, and I'm interested to hear your thoughts. Yeah.
C: That is a really good point about putting metrics to it, and I'm more than happy to see any sort of proposal around that. What we are looking into right now, at the moment, is how to stop ingesting more and more and more of it, especially looking at incident-related topics.
C
Are
there
any
patterns
that
we
can
stop
right
now
or
that
we
can
prevent
or
a
good
example
is,
for
example,
that
what
we
are
looking
into
is
do
we
need
any
sort
of
sign
off
by
engineering
that
all
loose
ends
are
tied
before
something
is
basically
kept
to
itself
working
in
production,
or
this
can
also
happen
during
product
feature
development
that
suddenly
a
feature
is
cancelled.
C
You
are
moved
off
of
it
and
it's
simply
put
on
ice,
but
the
queries
are
there
or
the
feature
flag
and
it's
in
production
and
it's
happening,
and
I
think
we
we
need
some
gates
that
we
need
to
define
or
figure
out,
maybe
around
feature
flags
and
removal
and
have
some
processes
in
place.
That
engineers
have
the
saying
on
look.
This
is
done.
I'm
happy
with
it.
I
can
go
to
sleep
at
night,
because
my
thing
was
just
deployed
to
production.
It
works
nicely.
We
it's
running.
C: I think we don't need to do anything else to keep it alive, and it's not disturbing anything; that's where we need to get to. This might also even impact the definition of when something is done. The merge request is merged, but is there anything to remind engineers, for example, to check 24 hours or three days later on the statistics, error reports, error budgets, everything, so you know: okay, this happened when this landed in production? Would it help if you basically got automatic pings on the MR, and would that reduce the amount we ingest? And I think it's really important that everyone is aware.
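The reminder idea could start as something as small as deriving follow-up timestamps from the merge time. A sketch, where the 24-hour and three-day checkpoints come from the discussion and everything else (names, the example date) is hypothetical:

```python
# Schedule follow-up pings on a merged MR so the author re-checks
# production statistics, error reports, and error budgets later.
from datetime import datetime, timedelta

# Checkpoints taken from the discussion: one day and three days after merge.
CHECK_OFFSETS = (timedelta(hours=24), timedelta(days=3))

def reminder_times(merged_at, offsets=CHECK_OFFSETS):
    """Return the timestamps at which the MR should receive an automatic ping."""
    return [merged_at + offset for offset in offsets]

merged = datetime(2021, 7, 1, 9, 30)
for due in reminder_times(merged):
    print(due.isoformat())
```

A bot watching merge events could emit a comment on each MR at those times, closing the loop between "merged" and "verified healthy in production".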
C: There are things that are risky in the sense that I ran into something a couple of weeks ago where there is simply an old pattern, and this old pattern gets repeated because people see how it was done in the first place and they do it the same way, even a year later, although it has already been replaced by something else. How can we put harder gates in there so that, even if this old thing is running and it's okay, but it shouldn't be built upon and shouldn't be repeated, we make sure that no one is repeating it? And how can we prevent those things? We will be opening up the floor on those discussions very, very soon, as soon as we have taken the first action on it, and then basically proceed and iterate from there on how we can really get to a state where we have it under control and it doesn't get into areas where it explodes. You have so many options for how to get out of it, either by refactoring, by replacing complete areas, and so on. So yeah, metrics: all for it.
A: Cool, thanks for the answer. Yeah, it's a fight against entropy; you're never going to get rid of it completely. It's just: can you manage it within some guidelines? Thanks.