A: All right, so at the risk of having everyone hear too much from me today, I do have a few items to review real briefly. I guess, to start off, we should probably just close out the discussion that we started in Slack about enabling the configurations feature flag. I think, just so we're all...
B: It's fairly small, I think. Zombie, have you rolled out any feature flags before using...
B: Yeah. There's also... I think the intention of this one was to roll it out, flip it on, in code. But Sam, I believe it's small work, and given that Arthur is waiting a lot on his merge requests, I think he would have time to do that in between. So I'll say a tentative yes.
A: Okay, sounds good. Yeah, if it's really small and he has time, then let's go ahead and just do that at this point.
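As an aside, the "flip it on in code" rollout discussed above can be pictured as a tiny in-process flag check. This is only an illustrative sketch; the flag name, registry, and functions here are hypothetical and not GitLab's actual feature-flag API.

```python
# Hypothetical sketch of a code-level feature-flag gate. The registry and the
# flag name "configurations_page" are made up for illustration.
ENABLED_FLAGS: set[str] = set()

def feature_enabled(name: str) -> bool:
    """Return True if the named flag has been flipped on."""
    return name in ENABLED_FLAGS

def enable(name: str) -> None:
    """Flip a flag on in code, as described in the discussion."""
    ENABLED_FLAGS.add(name)

def render_configurations_page() -> str:
    # The new code path stays dark until someone flips the flag.
    if feature_enabled("configurations_page"):
        return "new configurations UI"
    return "legacy UI"

print(render_configurations_page())  # legacy UI
enable("configurations_page")
print(render_configurations_page())  # new configurations UI
```

The point of the pattern is that the merge and the rollout are decoupled: the code ships dark, and enabling it is a separate, reversible step.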
A: Okay, so the next item is mostly just an FYI and a request. We're running a survey on the alert dashboard, and we have some good responses there. But if you are comfortable sharing on LinkedIn or Twitter, we would appreciate the extra shares, just to get a few additional responses. I know I shared it on my profile; so far it's really good stuff coming back.
A: And then for item number C: we're actually getting ready to do two solution validations. For those of you who don't know, a solution validation is a little bit more than a survey. We actually sit down and spend, usually, about an hour with individuals, and we walk through prototypes or mocks and actually do more of a usability study.
A: So we'll have a prototype where they'll try to click through things, or we'll ask questions, just to make sure that the designs are intuitive and easy to understand, and that we've overcome any of the big usability obstacles. We're planning to do two of those: one for the alert dashboard, and then also another one for policy management.
A: Just in the spirit of transparency, I wanted to make sure this group got a chance to see some of the early designs that Kyle has been putting together for us. I know our initial MVC that we talked about doesn't look anything like this; it's that list view, and currently the plan is to keep it like that.
A: While they haven't looked at it yet, they'll be able to drag and drop it over to the in-review state while they investigate it. Once they've confirmed that it is indeed something worth taking action on, an actual breach or an incident, really, then they'd move it over into the confirmed state, and from there they would be able to close it out, to resolve it. You could also dismiss it anywhere along this workflow.
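The drag-and-drop workflow described above amounts to a small state machine. As a rough sketch only: the state names and allowed transitions below are assumptions read off this description of the mock, not the final design.

```python
# Hypothetical sketch of the alert kanban workflow. State names are assumed
# from the description: triage -> in-review -> confirmed -> resolved, with
# "dismiss" available from any active column.
ALLOWED = {
    "triage":    {"in_review", "dismissed"},
    "in_review": {"confirmed", "dismissed"},
    "confirmed": {"resolved", "dismissed"},
}

class Alert:
    def __init__(self, title: str):
        self.title = title
        self.state = "triage"  # new alerts land in the first column

    def move_to(self, new_state: str) -> None:
        # A move must follow the column order; dismiss is allowed anywhere
        # along the workflow.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state

alert = Alert("Suspicious login burst")
alert.move_to("in_review")   # drag-and-drop to investigate
alert.move_to("confirmed")   # verified as a real incident
alert.move_to("resolved")    # closed out
```

Encoding the transitions explicitly is what distinguishes this board from the plain list view with a free-form status dropdown that the original design had.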
A: One of the reasons we wanted to get this in front of the engineering team early is because everything else has really great overlap with the rest of the Monitor team. So, the other alert UI that exists in GitLab: we're planning to pretty much mirror that almost exactly, but this layout and this workflow are a little bit different.
A: I've talked with Sarah Waldner, who's the product manager for that group, and all of our use cases line up almost perfectly, with just a few minor exceptions. Most importantly, the persona between us is different.
A: That would be ideal. I don't know if going with the current MVC route, though, where we have more of that list view, is going to be too much throwaway work, so to speak, if we go changing the UI to this later. But I've probably talked more than enough here; I'd love to get the rest of the group's thoughts on these mocks.
B: So, a couple of things. One is something we learned in vulnerability management: we could have done a better job explaining to users how we expect them to use the features. So for the high-level lifecycle management for vulnerabilities, the features are all there, but we haven't documented them in a way that says: here's how we recommend you use this, this is how we see you using this. So I wonder if there's an opportunity to do that with alerts as well, so we don't make the same mistake.
B: That's one point. The second point, which I was just writing down, is that I like the kanban approach. It looks nice, but it is a little bit of a deviation, and I was wondering to myself what sort of research we've done there. Those are my two comments.
A: Yeah, so the research is coming, right? That's that solution validation that we're kicking off now, so we do plan to validate this before we go out and actually implement it. To come back to your first question, about helping them understand the workflow that we have here: let's see if I've got another mock in here somewhere... yeah, I don't. These are Kyle's mocks, so sometimes it's hard for me to navigate them. But originally, the design was basically just, well...
A: Actually, you know what, I can pull it up here. Originally, the design was more similar to the workflow that the Monitor team has, where it's just a list view and you have a dropdown by each of the rows that lets you pick the status. And that's exactly what came up as a concern as we started to discuss this.
A: As you said that, Thiago, I thought maybe we could put just a little hover info icon with a tooltip here, to help explain the definitions of the states and statuses. Potentially, even down the road as we get more towards lovable, we might want to let them customize the statuses and design their own workflows, so to speak. But we certainly want to have a solid one out of the box that's going to meet most of our users' needs. But yeah, thank you for that feedback.
B: My pleasure. So this is probably part of the... you mentioned it's an early design, so this is all still going through a solution validation?
A: Yes, yep. We are starting to gear up to kick that off now. It's kind of a process; honestly, it takes a few months to get through everything, because you need to put together your prototypes, you need to put together the questions that you want to ask the users and design kind of a script, and then you have to recruit users, and then you actually meet with them, and then you synthesize the results. So that whole thing takes...
A: ...roughly two to three months. But I think that's enough time that we're not going to be at this point in engineering before we can finish that solution validation. That's great, thank you very much! Absolutely. On that same note, talking about alerts: as we do start work there, Thiago, you might want to reach out to the engineering manager counterpart on the Monitor team. When I had that call with Sarah, it sounded like they already have an alert object; they already have tables.
A: In fact, they have the ability to add alerts into incidents, which is where we'll want to go down the road. Later on: assigning alerts or incidents to users is already there, and that was a huge effort. They've already done a ton of work, and I'm really hoping that we can just reuse the majority of their work and create a new page, or a new view, into that data.
B: It's a great suggestion, and thank you for reminding me. I'll reach out; I'll just reach out to the EM, and at some point I imagine it'll be useful for the engineers as well, Samir and maybe Arthur, to catch up with that team on more of the details. But yeah, good idea.
A: Yeah, for sure. I mean, I don't know everything about the architecture, but from a use-case perspective, the overlap between us was probably 95 percent plus, and so there were very minor deviations here and there. But almost everything that they've built are things that we'll want, and future things on their roadmap are also things that we want to add. So it'll be great if we can collaborate.
B: That, and make sure we don't end up in a situation like we have with orchestration or configure, where there's a high degree of dependency and we're not exactly aligned on the roadmap, and we're hitting some blocks there. So if we do jump on a collaboration with monitoring, I want to make sure that we're not going to paint ourselves into a corner.
A: Yeah, that sounds great. And then this last item here that I have is also more of just an FYI, but I continue to meet with customers, and I continue to hear a strong ask for the ability to actively scan for vulnerabilities in production. And in that, again, the number one ask is scanning installed packages and dependencies and comparing those against a known-vulnerability list.
A: I take it with a little bit of a grain of salt, because it's being requested by the user group of where we're at now, which means we're not necessarily developing features for the group where we want to get to. But at the same time, we kind of have to balance that: we need to address both groups, and that message is coming through loud and clear from our DevOps users.
A: So again, I know we discussed that last time, but I just wanted to reiterate it this week, since I continue to hear that feedback.
A: Or, yes, but the challenge is that most customers have legacy applications that have been running in production for two to three years, maybe even more. They're a little bit embarrassed about how long those have been out there, and they want to get an accurate picture of the vulnerabilities as they exist today in production.
B: I was about to write something; I'll write it down after I ask Sam: is there any update on the exact understanding, from the discussion, in terms of scanning? Remember, when you first mentioned it, I threw out there that maybe that could mean port scanning.
A: They're just looking for something basic: here are the installed packages, and here are the known vulnerabilities in those packages. And we could take that one step further and look at dependencies as well: so not just the packages on the host, the ones you would get by listing them out from yum or something like that, but, if they're using npm or yarn or some other package manager, we can look at those dependencies as well and give a report on known vulnerabilities in those dependencies too.
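The scanning idea described above boils down to joining an installed-package inventory against an advisory list. A minimal sketch, assuming exact-version matching and entirely made-up advisory IDs; a real scanner would consume a feed such as a CVE database and handle version ranges rather than exact matches.

```python
# Hypothetical sketch: match an installed-package inventory (as a package
# manager like yum, npm, or yarn might report it) against a known-vulnerability
# list. Package versions and advisory IDs below are fabricated examples.
installed = {
    "lodash": "4.17.11",
    "left-pad": "1.3.0",
}

# advisory data: package name -> list of (vulnerable version, advisory id)
known_vulnerabilities = {
    "lodash": [("4.17.11", "HYPOTHETICAL-2019-0001")],
    "express": [("3.0.0", "HYPOTHETICAL-2014-0002")],
}

def report(installed: dict, advisories: dict) -> list:
    """Return (package, version, advisory_id) for each exact-version match."""
    findings = []
    for pkg, version in installed.items():
        for vuln_version, advisory_id in advisories.get(pkg, []):
            if version == vuln_version:
                findings.append((pkg, version, advisory_id))
    return findings

print(report(installed, known_vulnerabilities))
# [('lodash', '4.17.11', 'HYPOTHETICAL-2019-0001')]
```

The same join works whether the inventory comes from OS packages on the host or from a language-level package manager; only the inventory source changes.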
D: Wayne, over to you. So, I want to thank Samir again, and everybody else who helped; Arthur helped a bit as well, on the demo environment for the container security demo that we're recording tomorrow for the customer conference. We were troubleshooting even as recently as this morning. We didn't expect it to be a dogfooding exercise in how to install and configure everything, but that's what it turned into. So we learned a lot from it, and we're excited. Good stuff.
D: That was definitely part of it. Other learnings: so, the reason why we chose Azure is that the GitLab user we're co-presenting with, that's what he works with. I know you're a consultant, Nico, yeah. So we didn't want to change that. The other thing: it's still harder than it should be to install and configure, which I know we're planning to address with documentation and more. And also...
D: It's not easy to learn some of the open-source projects we've integrated, like the basics, so we may add some of that to our documentation as well. And I know we have some of this already planned, but things like: have a base policy for the network that doesn't do much of anything, except allow you to confirm it works, kind of thing, and maybe the same for the host, and the same for the WAF, etc. So we'll see.
D: But overall it was really neat to launch an attack, with me playing the persona of the attacker, then see Felipe catch me doing it, then configure things to blocking mode and no longer see my attacks be successful, against an application that Nico created. And then we also did a SAST scan of it, and lo and behold, the SAST scan found the bug that Nico intentionally left in the code that allowed it. So it's kind of all coming together. Overall, good stuff. That's great!
D: Looking forward to seeing it; keep us posted. Can we watch it live? You can watch, yes... ish! They're not doing any of the conference live, but you can register. It's mid-August; well, it's not quite mid-August.
D: It's in a couple of weeks. They're broadcasting everything live for the... well, they're broadcasting all the recordings, which they're doing professionally, at specific times, and then the presenters are available for chat online at that time, and then offline as well. And I think they're going to put them on YouTube later. But it's a free conference for everyone, which is neat: both GitLab team members and anybody, a customer or just the general public.
D: So it's neat stuff; looking forward to it. And thanks. That actually reminds me: I know we've got, quote-unquote, I don't know what we're calling it, booth duty. It's not booth duty; it's question-and-answer duty. So I'm taking some shifts there for Secure and Defend. Thiago, you are; Arthur...
D: Allen is as well, and I think, Sam, you're probably taking some of those as well, to answer questions. So that'll be fun as well. I'll add a link; if you haven't registered for the conference, please do.
B: Yeah, sure. So the short story there is that the implementation that Arthur has put forth works, and it's well tested. But it's a pattern that the maintainers are very concerned about incorporating into the code, and that's their decision; we need to support them. So, unfortunately, he will have to refactor it, and that puts the whole thing at risk. There are learnings we've taken from it.
B: It was "Oh, I see why you've done it this way, but still..." At the end of the day they went: yeah, it makes total sense why you did it this way, but it's still a bad idea, because it could create a maintenance burden or a problem in the future.
A: I think... and then, Samir, it looks like you had a few things that came up too. I know you helped a lot with the demo with Philippe.
C: Yeah, and also, I did something similar to what Arthur did, but the other way around: I had a huge MR that has all the context, and then I started opening the small MRs with the feature. But what happens is that if one person on the small MRs decides to kind of change the whole idea, then now you have to redo, re-cut basically, all the other MRs, because you lose the iteration over there, because now it's another logic.
B: Damned if you do, and damned if you don't, yeah. So yeah, we should have a chat about whether there's anything we could have done better, because, at the end...
B: What I'm hearing, Sam, is that if you do the whole thing, you might get pushback saying, hey, this is too big; and even if you don't get that complete pushback, it might still affect the chain of MRs that you have. And if you go too small, it affects visibility and understanding of the vision of where you're going with it. It's a tough spot; maybe something for the iteration office hours to discuss.
C: Just one thing on the demo: I think one of the things that got a little bit harder was that we had a big change on the GMA v2 related to what someone else contributed before us. It was great, but then we were not expecting to cover all those cases, and then to play out this new interaction between v1 and v2, with this new feature that we didn't test before it was merged recently, kind of added a little bit of complexity.
A: Yeah, so my take on all of this is: it's very understandable; I don't fault Samir or Arthur for this. At the same time (oh, and it looks like Wayne just dropped off), I find this very unfortunate. It kind of raises two questions. One: is our quest for an increased MR rate actually hurting our efficiency? If the purpose of getting our MR rate up is to boost efficiency, are we actually missing the mark there? And then the other thing...
A: ...I understand that there are good reasons to keep your code clean and free of technical debt and to follow good design patterns, but could we have come up with a solution that still allowed us to deliver on time, and perhaps gone back and cleaned up some of that tech debt in the following iteration? Anyway, I just wanted to share my thoughts on all of this: again, understandable, but it seems very unfortunate here as a company.
B: I can't agree with that more, Sam. It's been the question around this MR rate; I'm still getting used to it.
B: There's value, and I know that our CEO uses it to promote how well the company does at being able to get things out quickly. But I've raised in the past that, because the metric has kind of become a goal, it might be a self-fulfilling prophecy, and we might actually be hurting more than it's being useful.