From YouTube: Digital Experience Retro Video - Oct 7, 2021
Description
Digital Experience Handbook Page: https://about.gitlab.com/handbook/marketing/inbound-marketing/digital-experience/
Digital Experience Retro Doc: https://docs.google.com/document/d/1kMNiUF2UDuSrMDuzLyRi8OEhVxry_MJoYi38RmmWafY/edit?usp=sharing
A
Hi everyone, welcome to the Digital Experience team iteration retro meeting. We'll go over what went well and some things to improve on from the last two weeks. Today is October 7th, and first up is Jess.
A
Yeah, our little site outage today was, you know, spooky. It's the first time since I've been here that we've had everyone messaging us that the site is down, but within about 20 minutes we had a solution in the works. Even if our pipelines take a little bit longer than we want, we found another workaround, and it was just good to see everyone doing things and trying to help. So, the team...
B
No, yeah, back to me. Yeah, awesome, Tyler, that build-to-deploy stuff: I pushed to prod yesterday and it took four minutes and eight seconds. I literally got up, came back, and it was on prod, which was unheard of. So I'm really excited, and now it's nice to know that if something does go wrong, we can get to it almost instantly.
C
Depends
on
a
lot
of
other
factors
like
independently,
like
10
and
20,
but
like
you,
could
very
possibly
get
stuck
on
long
train
and
take
40
to
60
minutes.
So
that's
always
a
risk
here.
You
won't
until
you
do
because,
like
eventually
like
you
know,
this
is
just
a
fundamental
like
problem
of
like
you
know,
entropy
in
the
universe
like
as
this
grows
in
size,
the
time
must
also
grow
and
as
it
grows
in
complexity,
the
pipeline
must
also
grow
and
comply
like
we
will.
A
Get out of here, we're gonna lock Nathan out of the docs from here on out. Yeah... I made my first MR. I haven't merged it yet, so I haven't seen the blazing-fast speeds, but even just the organization of the core marketing site, everything's easy to find. It's lovely to work with, it's cool, and I enjoyed it, so good job to everyone involved. And then Parker.
D
The 10-year anniversary page looks great, thank you so much, Tyler, for getting that done. It's awesome to see that beautiful event template get put to use. It's really, really good.
C
Thank you, and thanks to everyone for the event template; building on it has been the easiest part. The thing that could have been the hardest has been smooth, so that's great. I appreciate everyone. Like I said in the release video before: Laura, your countdown component. They were like, "hey, can we get a countdown?" and I was like, "sure, whatever, here's the countdown," and then, "Laura did it, I didn't do this." It's great, and it's been the same thing for every other component in this thing. It's been awesome, so yeah, big ups to the whole team.
A
Cool, on to things to improve on. I think you have the first one.
B
Yeah, there have been a lot of changes to the nav recently; I think we changed some sizes and stuff. Let's just make sure we're applying it to both repositories. And then, as a bigger thing, as we build out the core marketing site, we have to make sure things stay consistent. I know we talked about buttons yesterday, it was a topic of discussion, and we're gonna try and make those the same in the old site, Slippers, and the core marketing site.
So just a heads up going forward: I'm going to try, maybe once every sprint, to go find some element and make it cohesive between everything. It's going to be a bit of manual work, but I think it'll be worth it in the end, so yeah, let's just try and make sure we're uniform.
D
I
have
the
next
and
it's
just
to
default,
to
building
new
stuff
in
the
new
repo.
So
if
we
got
a
new
template
just
default
to
building
it
there-
and
that's
like
a
personal
thing
for
me
too,
like
breaking
the
habit
of
like
oh.
D
A
Yeah, you know, on the subject of failing pipelines, I've been seeing weird stuff with my MRs. I know Javi also showed me one yesterday that had a couple of random failures; it just re-triggers itself, you know, wears itself out, and then eventually it passes and everything's fine. It just seems strange. Okay, yeah, I'll let Tyler jump in.
C
Oh yeah, just, you know, I don't think we know the root cause of this, but there's a pattern: a lot of API calls, and a lot of it looks like GitLab rate-limiting us, and Chad has started an issue to investigate it. The nice thing is, the more that you do in Buyer Experience, the more you can just sail through. Because right now it's like, okay, I want to make a one-line change; that one-line change has to hit all the APIs that the pipeline runs, and it doesn't need to. Buyer Experience does none of that, and it'll push directly. And then the second-order effect is that the more of those little one-liner changes go there (of course, you'll have to first make a 200-liner to rebuild the page), the fewer MRs are running in the www.gitlab.com repository, the fewer API calls are being requested, and the more we reduce the load on the systems that are rate-limiting us, right?
And yeah, Javi is asking in the chat about partial builds: not in this way. We might be able to figure it out, but right now, no. Fortunately we're small enough right now that it won't impact us, and those would have a lot more overhead. That's just off the cuff, but yeah, we're seeing a lot of that stuff, and I think it just has to do with rate limits and increased load on that.
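The "random failure, re-trigger until it passes" pattern described above is what a rate-limited API call tends to look like from inside a job. A minimal sketch of the usual mitigation, retrying with exponential backoff, assuming a Node-based build step with global fetch; the token variable, retry counts, and delays are illustrative assumptions, not the team's actual pipeline code.

```typescript
// Sketch: retry an API request with exponential backoff when rate limited.
// The token variable, retry counts, and delays are illustrative assumptions.
async function fetchWithBackoff(
  url: string,
  maxRetries = 5,
  baseDelayMs = 1_000,
): Promise<unknown> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, {
      headers: { 'PRIVATE-TOKEN': process.env.API_TOKEN ?? '' },
    });

    if (res.status === 429) {
      // Rate limited: honor the server's Retry-After hint if present,
      // otherwise back off exponentially (1s, 2s, 4s, ...).
      if (attempt === maxRetries) {
        throw new Error(`Still rate limited after ${maxRetries} retries: ${url}`);
      }
      const retryAfter = Number(res.headers.get('retry-after'));
      const delayMs = retryAfter > 0 ? retryAfter * 1_000 : baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      continue;
    }

    if (!res.ok) {
      throw new Error(`Request failed with ${res.status}: ${url}`);
    }
    return res.json();
  }
  throw new Error('unreachable');
}

// Hypothetical usage:
// fetchWithBackoff('https://gitlab.com/api/v4/projects/gitlab-com%2Fwww-gitlab-com');
```

Backoff only softens the symptom, though; as the discussion above suggests, moving small MRs to Buyer Experience so fewer pipelines hit the APIs at all is what actually lowers the load.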
A
Do we want to do action items? Do we want to talk about Javi's thing in the chat, a time-based pipeline setup, so that deploys get run every so often? I think that's about the random changes, and maybe the button changes that we were talking about.
C
Right, which I think was explored originally, like a year or two ago; I've looked through the issues from when we were really trying to get down from 60-minute pipelines, and that was a proposal. But then you smooth out the time by giving everyone a minimum amount of time for their change: they'll never get a 10-minute deploy, they'll always get a four-hour deploy. So it's more consistent, but then everyone gets a four-hour deploy. You could do exceptions to that rule, but then everyone's an exception, and you're back to variable deploys between ten minutes and four hours, right? So with the volume of change we have, I don't think we should do that, because I think it won't solve the problems.
A
Cool,
do
you
want
to
go
over
the
action
items
or
just.
C
Yeah
I'll
vocalize
real
fast
that
we
should
do
blameless
root,
cause
analysis
for
what
happened
today
with
our
outage
with
npm's
outage.
I
like
made
a
comment
in
the
in
the
incident
issue.
People
can
hop
into.
I
think
that
we
got
struck
by
lightning.
I think this was a perfect storm of things, and I don't think we have a ton of risk there, but I think it's basically a five-step process. An MR happens, passes its pipeline, and goes to master. Between the time that happens and the master pipeline starting, npm goes down. The master pipeline is already running and is unstoppable, running independent parallel jobs. Some of those parallel jobs build pages pointing to assets, assuming they're going to have assets that don't get built because npm is down. Incremental deploys go out to the bucket and reference those non-existent stylesheets, and then everything breaks. That wouldn't have happened if npm had gone down and failed on that MR like 30 seconds sooner, because that MR would have failed and never made it to master.
So that's my thought, and I think we should do a bigger analysis so that we can get down to the bottom of it and figure out if it makes sense to do anything. We could build out some more resiliency: we could build out snapshots of our deployments, save rolling versions of the website across Google buckets, and swap over as needed. But that would be a lot of effort, and we're already extremely resilient.
C
If
an
mr
fails,
it
never
gets
to
master
and
we're
always
just
defaulting
to
the
last
successful
master.
Build
like
this
is
always
the
case.
Whatever
got
out
is
like
there.
So
I
think
we
have
a
ton
of
resiliency
and
I
think
that
this
was
being
struck
by
lightning,
and
I
don't
know
if
it
would
be
worth
the
effort
to
to
do
the
other
stuff,
but
at
least
not
right
now,
right.
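One small guard that falls out of that failure chain (a sketch, not something decided in this retro) would be for the deploy step to verify that compiled stylesheets actually exist before any incremental sync to the bucket, so an npm outage fails the job instead of shipping pages that reference non-existent assets. The directory name and file checks below are assumptions about the build output, not the site's real layout.

```typescript
import { readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

// Sketch: refuse to deploy when expected assets are missing.
// "public/assets" and the ".css" check are assumptions about the build output,
// not the site's actual directory layout.
function assertStylesheetsBuilt(assetsDir = 'public/assets'): void {
  let cssFiles: string[] = [];
  try {
    cssFiles = readdirSync(assetsDir).filter((file) => file.endsWith('.css'));
  } catch {
    throw new Error(`Assets directory not found: ${assetsDir}`);
  }

  if (cssFiles.length === 0) {
    throw new Error('No compiled stylesheets found; refusing to deploy.');
  }

  // Also catch zero-byte files left behind by an interrupted install or build.
  for (const file of cssFiles) {
    if (statSync(join(assetsDir, file)).size === 0) {
      throw new Error(`Empty stylesheet ${file}; refusing to deploy.`);
    }
  }
}

// Intended to run as its own script step right before the upload/sync step.
assertStylesheetsBuilt();
console.log('Stylesheet check passed; safe to sync to the bucket.');
```

Run as a pre-upload step, a check like this would turn the "struck by lightning" scenario into an ordinary failed job that never reaches the bucket.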
A
One question that came up in the incident channel today, from Michael Frederick: do we have monitoring and alerting for CSS and other assets, and are we able to set that up? I don't know what the procedure is. I assume that throughout the website there's a ton of 404s for random images that are missing and whatever; is it worth figuring out which ones are legit? Yeah.
C
I don't know. I mean, we do have monitoring, because we get it through LogRocket and through Google Analytics, so we know where we get 404s, and we get reports; Hanef often hits us up about, "hey, we have this thing missing," and I think it's awesome that people are on top of it. We don't have alerts in the way of outage alerts, because of exactly what you're saying. One: where do we run it?
It's all client-side where the 404s happen. Do we have a server that polls the site every so often, goes through every single page, and lets us know when it hits a 404 error in a running browser? And then what if that's a local failure? Do we get a hundred errors because one computer couldn't get the stuff due to a network error on its end, while everything else is totally fine? Right, I don't know.
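On the "where do we run it" question, one common shape for this is a small scheduled job that crawls a handful of key pages server-side and checks that every referenced stylesheet, script, and image actually resolves, instead of relying only on client-side reports. A rough TypeScript sketch under those assumptions; the page list is an invented example, and a production version would need retries so a single network blip on the checker doesn't page anyone, which is the false-positive concern raised above.

```typescript
// Sketch: a scheduled, server-side asset check for a few key pages.
// The page list is a placeholder; a real job would retry failures so that
// one network blip on the checker machine does not raise an alert.
const BASE_URL = 'https://about.gitlab.com';
const PAGES = ['/', '/pricing/', '/events/']; // assumed example pages

async function findBrokenAssets(pagePath: string): Promise<string[]> {
  const pageRes = await fetch(BASE_URL + pagePath);
  if (!pageRes.ok) return [`${pagePath} returned ${pageRes.status}`];

  const html = await pageRes.text();
  // Collect href/src values for stylesheets, scripts, and images on the page.
  const urls = [...html.matchAll(/(?:href|src)="([^"]+\.(?:css|js|png|jpg|svg)[^"]*)"/g)]
    .map((match) => new URL(match[1], BASE_URL + pagePath).toString());

  const broken: string[] = [];
  for (const url of new Set(urls)) {
    const res = await fetch(url, { method: 'HEAD' });
    if (!res.ok) broken.push(`${url} -> ${res.status}`);
  }
  return broken;
}

async function main(): Promise<void> {
  const problems = (await Promise.all(PAGES.map((page) => findBrokenAssets(page)))).flat();
  if (problems.length > 0) {
    console.error('Broken assets found:\n' + problems.join('\n'));
    process.exit(1); // non-zero exit lets a scheduled CI job alert on failure
  }
  console.log('All referenced assets resolved.');
}

main();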