From YouTube: Retrospective 12.2 (Public Livestream)
Description
Please add feedback here: https://docs.google.com/forms/d/12-QPpvggEsqCvZnDCnuCjqP53joKwjRMPy4PH-Mqp-I/viewform?edit_requested=true Thank you!
A
Good morning, good afternoon, good evening, GitLab. This is the 12.2 retrospective summary. My name is Christopher Lefelhocz, and I am the current victim for running this retrospective summary, so I'm going to go ahead and get started. For those interested in the document, please look at the agenda. If you don't have the agenda handy, I'll post it in the chat window, so you have that available as well.
A
We'll go ahead and get started with previous retrospective improvement tasks and updates around those. Craig Gomes put in, from the previous retrospectives, the task to increase the number of DB reviewers and maintainers, and as of today we're at 7 members who have now started the DB maintainer training program. So thanks to all who volunteered and have done that. That's an increase of 4 people since the last retrospective, which is a very encouraging sign as we get more DB maintainers in place. And, Mek?
B
Happy to. So yeah, one of the shipments we did: as we transitioned the whole team to use the global stage labels and deprecated the team-level labels, we made a number of improvements. We haven't hit a 90% success rate yet, but we did hit 100% for a few days, and we have forward-looking plans to make this even better in the future, plus some Periscope charts to visualize the status. Over to James.
A
That issue hasn't been completed, but we improved our process overall with the subsequent issue, 394, which, I guess, is actually an earlier issue based on the way it's labeled. Basically, we completed that one, so you could think of it as a subset of the other, and we're making progress associated with it. Cool.
A
We'll go person by person through how we can improve in the last section, and then I'll summarize the improvements for the next release. So, what went well this month: we've definitely seen a lot of good comments around collaboration. If you read through the document, what you'll see is a lot of new folks basically being helped by folks who have been at the company for a longer period of time.
A
Also, there was some great, pointed feedback along the way from folks who were thinking about problems deeply, so that was very encouraging from that perspective. It seemed like, overall, there was a lot of helping each other to make sure we were all successful, so we're seeing good collaboration there. In the results section, we shipped an MVC of our DAG.
A
Let's see, quality: clear steps to improve the CI pipeline. Yes, so we now have clear steps to traverse the pipeline, and that should overall help us improve how we execute, particularly around those aspects. It's great to see that we've defined those steps. The last one to call out is GraphQL starting to prove its value, basically on the Create:Knowledge team, so it's encouraging to see that success as well. On to the transparency section.
A
Basically, this is a great observation of how our transparency value has really shown through in a number of places. The places I saw it were basically these: one, everybody was informed on the team itself; but it was also extremely valuable in communicating with customers, and in talking about issues where customers may in fact want to know what's going on. So it was a very encouraging aspect that we use transparency as one of our values, and that it's actually helping us communicate more effectively.
A
The other thing is, we all tend to focus on the things we want to improve more than on the things we do well, so just an FYI on that. That's not to say that we don't do things well; it's to say that we like to focus where we think we'll be the most impactful. Cool. So, in the planning section for what went wrong this month: the general theme here was some basic estimation and coordination problems, or challenges, that we had around this area.
A
Some coordination challenges were observed on the Create:Knowledge side. In the access section, this is something we focused on a couple of months ago, and it looks like we need to focus on it again. We had onboarding as an issue, where new folks getting on board were having trouble getting their access requests handled. This appears to extend to other areas beyond that; examples are GCP access and a few other specialized access points that we need. So we'll try to dig in there and understand better how we can improve that developer experience. And then the last section where we had a theme was basically in results. In particular, the maintainer roulette doesn't account for what a maintainer's current status is, and doesn't necessarily anticipate their availability, so that led to some delays, just based on the fact that people were going on vacation. By the way, I did a survey of the development organization, and I think we were between 85 and 90 percent availability for the month of August, just FYI.
A
So
so
that's
definitely
impacting
us
as
well
from
that
perspective,
and
then
there's
also
some
confusion
on
the
release
process
of
when
to
pick
into
an
auto
deploy
and
when
it
can
be
used
associated
with
that
which
we
have
below
I
believe
in
the
areas
to
improve,
which
is
quickly
where
we'll
move
to
next
associate
with
that,
and
we
can
go
ahead
and
hit
that
section
for
how
we
can
improve.
And
this
is
where
I
hand
it
off
the
folks
for
them
to
talk.
Matt.
Are
you
available
to
talk
about
tooling,
D
Yep. So, I'm Matt from the Monitor team. The one area that we saw, which goes along with what Christopher was just talking about, was the pace at which we auto-deploy to GitLab.com. The core issue is understanding when our changes have been deployed, and being able to verify everything there. So, first, there is an issue open that could at least surface what is currently deployed.
A
Cool. One other feature that is also coming is that, for a given MR, you will be able to tell when it has actually been deployed. Basically, it'll update the MR to reflect that it has been deployed. So that's another way of getting information on it: it doesn't necessarily predict when it's going to get deployed, but it at least gives you information relative to it. That's encouraging. Cool. Ken, are you available to talk about the secure and monitoring aspects?
E
Yes.

A
One of the things to note is that, if it turns out we're actually using our own product and dogfooding, we should note that as well, depending on which particular pieces of monitoring we're looking at: the general tooling, versus something specific that we would want to dogfood. Yeah, cool. All right, Andy, are you available to talk about yours?
F
This came out of the most recent milestone, where we had some planning that kind of put us up against the deadline, as well as some dependent issues that we wanted to plan in advance of those UX issues getting done. We're also trying our best not to still be defining the experience throughout the milestone; hopefully, we have the experience defined prior to the milestone beginning. So we've worked on a linked issue to address some of those issues.
A
Thanks
Andy
coldly
got
in
the
show
up
and
we
can
definitely
track
that
if
you'd,
like
I,
just
had
the
retrospective
label
and
we
can
keep
an
eye
on
it.
From
that
perspective,
a
cool
Elliot
Rushton
is
not
here
from
verify.
I'll
read
it
when
Rich
and
unplanned
changes
appear,
we
should
hop
on
sink
called
speed
up
catching
of
this
view
with
everything.
So
this
is
kind
of
a
reflection
of
the
fact
that
sometimes
you
do
need
to
synchronize
coordination.
A
We
want
to
move
a
sink
as
much
as
possible,
but
when
there's
unplanned
changes,
oftentimes,
it's
it's
a
beneficial
or
best
action
is
to
take
a
synchronous
approach.
That's
basically
syncing
up
from
that
perspective,
so
that's
just
a
thing
that
their
team
will
be
attempting
to
do
here
and
then
are
you
available
to
talk
about
the
next
one
I'll.
C
If they want to have an idea of the questions they should ask, or what they should discuss at the beginning of the milestone: I noticed that, depending on how comfortable people are with working remotely, some people already have calls scheduled with their counterparts and a plan in place. But for those who are new or not sure how to handle it, I was just going to offer some guidance and make some updates to the handbook about that as well.

A
And test coverage, Mek?
B
Thank you. So yeah, I wanted to call out that there's a big push from Quality: as we evolve, enterprise customers have enterprise-level expectations of us, so, in addition to unit and integration tests, it's crucial that we test, and even retest, end-to-end scenarios from the standpoint of a user. Quality is making a huge push for this: in Q3, we are aiming to have all enterprise features covered by either integration or end-to-end tests. Obviously, that needs to be broken down and groomed.
B
So it's running at its most performant. I just want to socialize it and make it visible: this is an improvement effort that we're trying to make long term. In the future, once this is done, we also want to look into how development teams can call on these test suites going forward, and have them be, again, an early warning sign when your test is failing.
B
You
should
know
a
bit
before
it
hits
staging
and
production
moving
on
to
community
contributions,
so
yeah
we
we
identified
some
some
miles
that
have
fallen
the
wayside
and
part
of
its
because
that
we
don't
have
try
to
include
me
automation
in
in
other
satellite
projects
and
what
I
mean
best
I,
like
projects
are
the
ones
who
are
not
Cee.
So
we've
been
able
to
some
of
these
rules
for
diddly
runner
and
get
lab
pages
going
forward,
but
I
think
going
forward.
A
Thanks, Mek. So: improvements for the next release. These are the four that jumped out at me as lines to track at a high level, and I'm open to suggestions for additional ones that we want to include. In particular, the ones I felt we should definitely make sure we see some progress on in the next month: tooling, basically around being able to see MRs being deployed; my understanding is that's going to get done in the next week, so that's an easy one from that perspective. Then, to see improvements on the points about communication and the feature flags: getting that update in the handbook to represent best practices, and also to make sure we don't always plan for the optimal case; planning for more of a common case may be a better planning scenario from that perspective. And then some comments around basically improving the discovery, building, and coaching on community contributions; basically, that issue looked like a big one.
A
This
week,
value
or
community
contributions
want
to
make
sure
we
respond
and
are
there
quickly
on
them
and
then
that
our
use
of
monitoring
tools
for
the
secure
team
I
felt
like
one
as
big
Todd.
You
may
have
openness
or
olivier,
open
this.
Essentially
it
we
may
need
to
narrow
down
scope
there,
because
it
is
pretty
broad
right
now
so
focusing
on
the
winner.
Two
things
that
we
could
improve
on
would
definitely
be
an
improvement
from
that
perspective,
cool
are
there
ones
that
I
miss?
A
All right, well, thanks everybody for a great retrospective. This is a lot of good data and a lot of good feedback; I really appreciate it, and I encourage everybody to continue to do that. I love seeing the follow-up on the action items as well, so that we're seeing continuous improvement. I really appreciate everybody's hard work. Everybody have a great day, or evening, or night. Thanks, everybody.