Description: Accomplishments, pairing, OKRs, team day, etc.
A
Threat Management staff meeting today. All right, well, we've had quite a few accomplishments since the last meeting. What's that, 50 or 60 different things since the last meeting? That's pretty cool.
E
And then, just five minutes ago, we finished the first pair programming session with Jonathan. We also have the recording, and we will probably share it.
A
Or YouTube. I mean, whatever you decide. If we can put it on YouTube, great; if you're not comfortable with that, that's totally fine too.
A
And yeah, totally understand. I'm glad we're doing it, though. I think it's hard to do pairing sessions with everybody, you know, spread out across the world, but when people have overlapping work hours and they want to, I think a couple of different teams do it at GitLab. You know, many companies do it, but at GitLab that's not as common, just because we're so distributed. So when we have opportunities and people want to, I think that's neat. The growth team does it.
A
Neat. Alexander, I think you were saying you wanted to do some pairing sessions too. Did I see that?
A
Thanks so much. Okay, so Q4 ended at the end of January. They're not calendar quarters; they're our own fiscal quarters. So I'm going to quickly go through where we landed on Q4 and then what we have planned for Q1.
A
You can see the links in the doc, the links to the issues. Then I'm going to have us break out into breakout groups, ask a couple of questions, and then maybe the breakout groups come back and report. So, Q4: how did we do? We had a goal around increasing product dogfooding and performance, and we delivered on some of this.
A
We did really well on some, like merging Pajamas changes, where we did quite a few more than what we expected, and on others we didn't get as much time to work on them. It wasn't that we didn't do well on them; it's that we were very optimistic, overly optimistic, on the time we'd have to put into them. So, around dogfooding of the secure scanners, we did a good bit of that, but not the whole list. Are folks able to hear me? I see Alexander can't. Yeah.
A
I think it's just you, Alexander. So, also making sure the threat management pages have a Largest Contentful Paint (LCP) of less than 2.5 seconds. We got some of that done, but not all. Overall, though, good stuff. Increasing team velocity didn't go as well as we wanted; we were over-ambitious on the number of contributions that leaders make, especially me when I took over the growth team.
A
I basically gave up on this for myself, and we didn't really take into account vacation time, which we should have. That was one that Lindsay and Thiago and I in particular worked on ourselves. And our MR rate is not where we wanted it to be; we've got some ideas on continuing to improve that next quarter.
A
It was the rate of increase; it actually decreased a bit, but we also didn't take into account vacation time either. We know now, for nine months or a year from now, to pay more attention to that. But overall, great stuff. And we're supposed to set the goals really high: if you meet all of them, you didn't set good OKRs. They should be aspirational. And then the last set is taking actions to improve team cohesiveness.
A
We did almost all of these, which is great: feature flag training, onboarding issues for new folks to the team, new engineers contributing towards the product, which is great, and coffee chats across team members. The team contract agreement is one where I think we pulled that back a bit, and that's...
B
To add to that: largely because the handbook covers a lot of the things I think you'd usually talk about with a team in a team agreement. So when Thiago and I were sitting down trying to figure out what that would look like, we decided it would just be a bunch of links to handbook pages. I think there's still some benefit we could get out of some getting-to-know-each-other type team user guides, but that's different. Yep.
A
And then the last one: oh, we did all the performance reviews on time and on budget. So that's Q4; definitely a good Q4. And then Q1: that's in 4b in our agenda, click on that. It always looks great at the beginning of the quarter; we can do all these things, we're all right. And then, as you know, as the quarter progresses, they don't all happen.
A
You know, real life hits compared to what our plans were, but that's okay. We've got lots of good stuff planned, and we have combined OKRs here for threat management and growth. So if it says growth-specific, you can ignore it; that's the growth team. But some of them are common between the teams.
A
So the things we're planning to do: every team member has one career development conversation with their manager. That came out of the Culture Amp survey, where we found we are not doing enough of those kinds of discussions. And then also, every people leader in threat management and growth has one Culture Amp action (which does not include the one above; it's one additional one), where, you know, Lindsay's going to look across her team and I'm going to look across the whole team.
A
Thiago is going to look across his team, based on the feedback, decide what action to take, and let the team and me know what that is. For increasing performance and usability, Lindsay's team is going to continue to work on Pajamas conversions; that's great for performance and also great for team velocity. Thiago's team is going to do something similar and cherry-pick small changes. In addition to the normal stuff that we do, adding missing GraphQL documentation: those are great small things for people to work on.
A
In addition to other things, you know, we continue to iterate. Well, the LCP one's going to continue, and I'm also working on a performance dashboard across all of engineering. That will help people understand performance and become aware of it in different ways than what we measure and report on today. Lindsay, why don't you take the quality ones, both yours and Thiago's, and also the DIB training ones, so people aren't just listening to me blather.
B
Regarding quality: it's been, both on the Threat Insights side and the Container Security side, something that we've been a little hands-off about. Within Threat Insights we have Will Meek, who's our dedicated SET, so he's very knowledgeable and, you know, active. As far as Container Security goes, we don't have anyone in SET. So we want to both fill that gap in Container Security and put together...
B
You know, a set of end-to-end tests that at least exercise the golden path and get us started on building that test framework for that group. And then on the Threat Insights side, we want to understand what Will's process is, start to become more involved in it, and even suggest some improvements. So that's what these two KRs represent, and you can see they're labeled accordingly. And then, you also said the product quality, or the performance?
A
The LCP one. And also, everybody in threat management is going to do the DIB training as well; that is the goal. Diversity, yeah. And if someone did the previous one that the company was planning, it's optional; the old one was really long.
A
Actually, most of the growth team did it, and the feedback was that it was too long. This new one took me about an hour and a half to do, but I was relatively familiar with a lot of the handbook pages that it referenced, so it might take others, I'm guessing, a little bit longer. But overall it's a good experience. So that's the other goal. On all of these, I'd like to do breakout groups, and the questions are: what do you like about them? What do you not like?
A
How should we improve them? So basically: what are your overall impressions? What do you like, what do you not like, and how should we improve them? So I'm going to do breakout sessions here, and I'm going to pause the recording as I do that. Is that weird on the recording? Hold on a second.
A
All right, everybody has unceremoniously been thrown back into the main room, and into a recording too. So hopefully this breakout room feature way of doing things worked well. So, who wants to speak for each room? What's the summary of what you like, what you don't like, and what we should improve?
G
So we discussed the current OKRs, like Q1, and Mehmet and Alexander and I were talking about the one where 100 percent of threat management pages have an LCP of 2.5 seconds. We were thinking the percentage could be different; maybe we can set 95 percent or 90 percent, or something like that.
G
So if we achieve something near the LCP target, we can say at the end of the quarter that we didn't achieve 100 percent, but we achieved 90 or 95 percent, or something like that. Did I miss anything, Mehmet or Alexander? You can add to it.
E
So the main idea is, when it comes to performance measurements, we can never talk about 100 percent. Even if we write the fastest program, there can be some error rate, meaning that, say, 99.9 percent of the time it runs under a certain amount, but just one request exceeds the set target. Then you'd say we didn't reach the target. Therefore, in performance measurement...
E
We have other measures, like the 95th percentile, or 90 percent of the time under that target. So I think it's better to say, for example, the 95th percentile under 2.5 seconds. And if we reach a target of, say, the 95th percentile of the time under three seconds (sorry, seconds, not minutes), then we say: hey, we should target 80 percent.
B
I'm curious about that versus setting it as an average, right? I understand there are fluctuations, and there are peaks and valleys, but would you feel it's reasonable to say that we have an average of 2.5 seconds?
E
We can also say average, but if we say the average is under 2.5 seconds, then it's just an average, right? Because one request can take 100 milliseconds and another request takes 10 seconds, and the user experience for that page is terrible while the other page is so fast. So putting it that way is probably going to mislead us; it's going to misguide us.
E
So, commenting on that: the performance tools as well are giving us the measurement for the 95th percentile.
A
Picking up on a point from the chat: maybe we should look at the median rather than the mean.
C
Yeah, because that won't be affected by outliers so much.
B
I was actually referring to both ideas. I think Mehmet was suggesting something around looking at a percentage target, you know, a 95 percent goal at least across our pages, versus potentially looking at a median instead of the average. I think those are both things that we could discuss, and I think the issue is the right place to discuss them. All right.
F
I'm adding something, I'm typing, but I will say, you know, we talked some about the OKRs. We tend to focus on them when we're actually going through them as a group, and sometimes in our individual meetings with our managers, but they're not always at the forefront of our minds. It's like: oh yeah, that's right.
F
We have OKRs, but, you know, I think Thiago and Lindsay do a great job of reminding us what the OKRs are in our individual one-on-ones and kind of helping us move through them that way.
F
I, for one, do like the GraphQL documentation one, because I've done a couple of them already, and I get my hands into the other parts of the code that aren't threat management or Threat Insights, because I did something with the MRs and the issues and all of that. It kind of forces you to look at other parts of the code when you're doing these as well.
A
So I know we're over on scheduled time, but Lindsay and Alexander, do you want to quickly mention the last two items?
B
Yes, I will reshare it, but I did share a survey. I mentioned earlier that our retrospective was a lot about pairing, and this was asking about how often you've paired; as of last week I'd only gotten one response. So there's a link here; please take it. It'll take you one minute to fill out that survey for me. Thanks.
A
Oh, and I didn't set it up, by the way; I was just one of the participants. Lindsay and Gila and a bunch of other folks ended up putting it together. But thank you. Do you like the band practice the most?