From YouTube: Threat Management Staff Meeting 2021-03-09
Description
Topics discussed included community contributions, promotion, hiring, referrals, team building, and a recent outage
B
He won't make us all feel bad by having an even bigger house, so he just took a picture of his old house.
C
Notice that there are at least two community contributions, which is neat as well.
A
I think the next agenda item is me excited... no, that's not the one I was excited to share; this one is good as well, though. We've got a lot of hiring going on; I hope everyone's aware of that. There was one position opening up for a backend engineer within Secure that I wanted to share. I know there are a couple of other positions open in frontend, and right now we're interviewing for engineering managers.
B
Yeah, I've just been so honored to work with all of you this past year, and I look forward to continuing to do great things with all of you.
C
It's well deserved, Alexander, great stuff. And thanks, Lindsay, for advocating to make it happen; it's good stuff. So, last item on the agenda: as Lindsay mentioned about roles for the fuzz testing group, I'm recruiting for a senior.
C
It's a different kind of role than we normally recruit for at GitLab: a SEG, or Single-Engineer Group, which is somebody who doesn't want to, or need to, collaborate with a bigger team. They kind of work on things all by themselves; they know just enough product management, just enough UX, etc., and they work on experimental things that will often start and then sometimes stop.
C
So it's kind of "work on big bets," and it's going to be reporting to me, at least; currently that's the plan. There's a single SEG reporting to one of my peers to work on application performance management. This one is actually to work on machine learning ops, or MLOps, which is kind of neat.
C
I just started recruiting for this a week ago. So if you know anyone that's interested, or might be interested, let me know; and if you have any feedback on the job listing (it's very different from what we normally list), I'd love to hear it.
C
Definitely do the referral. It means less to the hiring manager that you don't know them directly than if it was a direct referral, but it doesn't mean nothing either. So absolutely refer friends of friends.
E
I had two examples. One is that recently there was a full-stack engineer looking for a GitLab job, but it was a friend of a friend, so, you know, I told them that I can reach out and tell them that there is this position open so that they can check. It's good to know, thanks, Wayne, yeah.
C
Good stuff, great question. I wonder if, wherever the referral process is documented in the handbook, we should make that clear.
A
And while we're talking about open positions, I also wanted to make sure everyone is aware of the fact that we're hiring for another backend engineer within Protect. This will be somebody on the container security group, under the category (I'm not sure what it's called) that will be focused just on security approvals. So this is another single-engineer group, like Wayne was explaining before.
A
I'll put this on the agenda; I was just thinking about this ad hoc. I scheduled two game times for next week, so take a look at the threat management shared calendar: there's a PST morning slot and a PST afternoon slot, to try and accommodate time zones. We can do Drawasaurus or puzzling or whatever everyone likes. I miss y'all since our last team day.
A
I thank Alexander for this, because he was the one calling out that none of us had given the puzzle any love as of yesterday.
B
It's still in its infancy of completion, for sure.
C
So, did anyone happen to look into (of course, I have to paste the actual link) this outage that happened yesterday? If not, that's totally fine; it didn't directly involve any of our teams. Well, we've got a little... I know I just added to it. It just came to my attention since I'm covering this week for my manager, Christopher; my job now also includes summarizing outages, what we're doing about them, etc. Anybody happen to take a look at this? Quick raise of hands. Did you get involved? No.
E
I just experienced it. It was this morning.
C
Yeah, yeah, this morning, Europe time, yeah.
C
So it looks like a large number of expensive database queries running on the primary caused high load across the cluster. It started saturating the connections to the database, which caused a slowdown across the platform; at a certain point, no requests were served. We did emergency maintenance to mitigate the load increase, and once the database was capable of serving the queued requests, the problem recovered. We didn't know, at least at the time this was written, the cause of the initial slowdown, and it looks like later the root cause came down to wrong statistics.
C
On the namespaces table: there was an autovacuum on the namespaces table just before the incident started, but auto-analyze was not run; it was last run two days before. We ran an EXPLAIN ANALYZE on it, and now it takes 20 to 150 milliseconds, when it was, what is that, 200 seconds (198,000 milliseconds) before the analyze. That looks like the cause.
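The staleness being described follows PostgreSQL's documented autoanalyze trigger: statistics are refreshed when the number of rows modified since the last analyze exceeds `autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples` (defaults 50 and 0.1). As a rough illustrative sketch, not GitLab's actual tooling, a monitoring-side check could mirror that condition to flag a table whose churn has outrun its last analyze:

```python
def stats_likely_stale(n_mod_since_analyze: int,
                       reltuples: float,
                       analyze_threshold: int = 50,
                       analyze_scale_factor: float = 0.1) -> bool:
    """Mirror PostgreSQL's autoanalyze trigger condition.

    Autoanalyze fires (roughly) when rows modified since the last
    ANALYZE exceed: analyze_threshold + analyze_scale_factor * reltuples.
    If that condition holds but no analyze has happened, the planner may
    be working from badly outdated statistics.
    """
    trigger_point = analyze_threshold + analyze_scale_factor * reltuples
    return n_mod_since_analyze > trigger_point

# Hypothetical numbers in the spirit of the incident: a large, busy
# table whose modification count has far exceeded the trigger point.
print(stats_likely_stale(n_mod_since_analyze=2_000_000, reltuples=5_000_000))  # True
print(stats_likely_stale(n_mod_since_analyze=1_000, reltuples=5_000_000))      # False
```

In practice the inputs would come from `pg_stat_user_tables` (`n_mod_since_analyze`, `last_autoanalyze`) and `pg_class.reltuples`, and the immediate mitigation is the manual `ANALYZE` the team describes, which took the query from roughly 198,000 ms down to 20 to 150 ms.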
C
Knowing that... oh, it's in the document, Alexander, the link, unless I pasted it wrong, which I may have done. Yeah, it's number eight in the doc, which I just put in as Lindsay was talking about the previous item.
C
As I look at this (and I've got a little bit of a head start; in the last discussion I was in, right before this one, we were discussing this), it looks like the statistics on the table, or on the indexes, got out of date and caused the queries to take a really long time to run. That's something we should have caught; or, not "should have," something we can do a better job of catching in the future when it occurs.
E
I have some feedback regarding the incident, if it was only the stale statistics, because I haven't checked the issue yet. This morning GitLab was unreachable, so I was not able to fetch the repository.
E
I was not able to check my issues page either. I'm not sure how the backend or how the system is configured, but a problem in the statistics table caused a major outage. So this could be investigated, because to me, for instance, fetching from the repository should be unrelated to the statistics table; or maybe the statistics table should be... I'm not sure how it's built, but it should probably be something like asynchronous.
C
Good, I like the way you're thinking about it. One thing that's been discussed is partitioning the Postgres database, and I believe it's by customer, so each customer has its own instance. What that would mean is that if one customer has an issue, it won't affect everyone: if one customer is creating enough of a problem that it would otherwise affect all customers, in this mode it would only affect them. So it limits the number of customers that would be impacted. That doesn't make the customer
C
that is impacted any happier, right, but it limits the number impacted in things like this, perhaps, in the future. It does create more maintenance work, though. Like, let's say this analyze didn't run on one customer: would we know? Would the customer have to report it to us, or would we have automated monitoring? But it might have limited the impact to just certain customers. So there's been discussion of doing that: sharding the database by customer, not sharding by functionality.
C
I was guessing on permissions; I don't actually know. I see it often, though, so I think it's a key table used in many places. But it was the namespace... As I read this, the problems of the namespace table were causing queries on the namespace table to take a long time or time out, which was causing other queries to get queued up by the database, which then caused a problem with accessing any table at all, because it caused an overall database issue.
C
All right, if you have thoughts on this, please do comment on the issue if you have the time and interest. I have not read it in detail yet; I just read the summary half an hour ago. But I would love to see other people's thoughts on it, now or in the future.
C
So, yeah, if we had the option to shard the database, to split it by customer, the pros and cons of that: a pro is, as you mentioned, you localize issues to specific customers. If there's an issue with one customer that might otherwise affect all, it would not affect all; it would affect one. A negative is that the code needs to know about the different shards.
C
Are you connecting to this instance of the database or that instance of the database? Unless you put in some kind of middle layer that abstracts that away. And if you want to query across databases, join across databases, that can be a real challenge as well. We don't do that much joining across, but we do some.
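The "middle layer" idea mentioned here can be sketched as a small routing shim that hides which shard a customer lives on. This is a minimal illustration under simple assumptions (hash-based placement; all class names and DSNs below are hypothetical, not GitLab's architecture):

```python
import hashlib

class ShardRouter:
    """Map each customer deterministically onto one database shard.

    The application asks the router for a connection string instead of
    knowing the shard layout itself; cross-shard joins would still need
    separate handling, which is the cost discussed above.
    """

    def __init__(self, shard_dsns: list[str]):
        if not shard_dsns:
            raise ValueError("need at least one shard")
        self.shard_dsns = shard_dsns

    def dsn_for(self, customer_id: str) -> str:
        # Stable hash so a given customer always lands on the same shard.
        digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
        index = int(digest, 16) % len(self.shard_dsns)
        return self.shard_dsns[index]

# Illustrative usage: one noisy customer saturating its shard would only
# affect the other customers that hash onto the same DSN.
router = ShardRouter([
    "postgres://db-shard-0.internal/app",
    "postgres://db-shard-1.internal/app",
    "postgres://db-shard-2.internal/app",
])
print(router.dsn_for("customer-42") == router.dsn_for("customer-42"))  # True
```

A real implementation would more likely use a lookup table rather than pure hashing, so customers can be rebalanced without moving data, but the routing interface the application sees stays the same.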
B
A more expensive change we could do is to analyze the namespace table to see if it could be broken up into other tables. Maybe the namespace table is a product of legacy; we've thrown too much in there, and now it's become a too-heavily-relied-on table. So we could look.
B
Yeah, you mentioned indexing; I'm surprised there wasn't some sort of alert about that.
C
Yeah, I haven't read through the details. It might have been that there was an alert and we don't have a way to escalate it, or it's not escalated to people, or... yeah, I don't know the details. As always with things like this, it's a failure in process, not a failure in people, right? In the spirit of a blameless root cause analysis, some process, or set of processes, that computers and people depend on failed us, and we can improve that process.
C
It's not that the people made a mistake. We need to make it easier for the people to succeed, and for the systems to succeed as well without needing to rely on people. And again, I don't know the details underneath this, or any of the history on it.
C
I don't know this person, but somebody I know and trust does know them, so it would be less confidently so. But we definitely have more of an outgoing recruiting process than an incoming one, where outgoing means we look to find people that might be a good fit for GitLab, versus listing lots of jobs in lots of places. So I still think it's worth validating this in the handbook.
E
So maybe I should create an MR which says... okay, awesome.
C
That'd be great, thank you. Thank you so much, and again, it was a great question you asked, because when one person asks it, it probably means 50 people have thought about it before. So it's great to get the handbook updated too.
C
Not that you or I or Lindsay are among the DRIs for accepting that change, but I'm sure we'd be able to look it over. We'll send it to the DRI, which is probably somebody on the recruiting team, to review it and decide if they're going to merge it or not.
C
Excellent. Oh, I guess we have a last item: we have a new person starting, whose name we're not going to say because we don't know if they've given notice to their current employer. They accepted a while ago, but they couldn't start until, it's either mid-March or mid-April, on Thiago's team. If somebody knows better than me, that'd be great, but yeah: somebody who accepted back in January is starting in the next month, which is neat.
D
He's a past contributor, I think, and he has experience with Kubernetes, yep.
C
Oh, thanks, Amir, Sebastian, and yeah. This person is also a former, or current, community contributor, which is kind of neat, and happens to be based in India, if I remember correctly. So even more time zones; yay, more diversity. So awesome. All right, thanks, everybody, have a great...