From YouTube: Defend Planning Breakdown Runtime Application Security : Application Infrastructure Security Group
Description
Defend engineers working with PM to break down upcoming issues into components, clarify requirements, and identify work boundaries.
A: To the container network security group — the planning breakdown meeting we've been discussing: we're expanding this and really just referring to it as the big group meeting. I believe that the majority of the agenda will be around the planning breakdown discussion, at least in the phase we're at, but this could also be useful for sharing, you know, high-level thoughts. We're still kind of working this out, but we're trying to keep meetings to a minimum and be very efficient about them.
A: So that's the idea there. For now we're going to focus on our pre-existing agenda, which is around planning breakdown. So as a reminder, we're not trying to groom these issues; we're just trying to answer these main questions: are the requirements clear enough? Do we know the boundaries? And is the research and solution validation complete? Once that's done, these issues will be assigned off to individuals for grooming and weighting, and moved into a scheduling phase based on the size and priority from the PMs.
B: So yeah, I don't have any expectation that we'll get through the whole list, but I figured we'd start at the top and see how far we get. So let me go ahead and share my screen here, just so that we can all follow along. So I took this toggle, which we've discussed for a few weeks now, and we wanted to break it down into four smaller issues. I went ahead and did that to make it a little bit smaller, so now we have an issue per piece: global on/off, global logging and blocking, environment-level on/off, and environment-level logging and blocking. We do have mocks for all of these at this point, as well as requirements. I don't know if I'm going to go through these — I imagine this will be pretty short for each of them — but I mostly want to just check if we have any questions. Starting with the global on/off toggle: this one I think we're almost done with, given the existing merge request.
B: I wanted to keep this one open just because, now that we have mocks, the UI here is slightly different from how it was implemented in his merge request. So I want to make sure that we don't close this out until we're done done — where we've got, you know, the UI in alignment with the mocks as well. I don't see any reason to wait on merging the current one, though.
C: What we have right now, in terms of, like, the environment variable, is essentially the rule engine, but we don't really have the audit engine independent of the global setting. Maybe that's an implementation detail, or it gets really specific, but I'm kind of soliciting feedback there, and on whether that's something we need to discuss more. Yeah.
B: So I can clarify that a little bit. I don't envision a state where customers will want it blocking but not logging. So really there are three states: either it's off, meaning no logging or blocking; or you have logging turned on but blocking turned off; or you have them both turned on. So this one is really just about enabling and disabling it entirely — this is how you would shut the whole thing off — and then this story separately allows you to switch modes between logging and blocking.
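The three states B describes can be sketched as a small enum with two predicates. This is only an illustration of the discussion, not the actual product code; the names `WafState`, `is_logging`, and `is_blocking` are invented here.

```python
from enum import Enum

class WafState(Enum):
    # Hypothetical names; in the product this is a toggle plus a mode dropdown.
    OFF = "off"            # no logging, no blocking
    LOGGING = "logging"    # logging on, rules engine (blocking) off
    BLOCKING = "blocking"  # logging and blocking both on

def is_logging(state: WafState) -> bool:
    # Logging is on in every state except OFF.
    return state is not WafState.OFF

def is_blocking(state: WafState) -> bool:
    # The rules engine only enforces in BLOCKING mode.
    return state is WafState.BLOCKING
```

Note that "blocking without logging" — the state B says no customer would want — is simply unrepresentable in this model.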
B: So once it's enabled, you would get a dropdown here, and you can put it in either logging or blocking mode. Logging is on in either case in this state, so it would just turn the rules engine off or on, but logging would always be on. And then this toggle would either shut everything off or turn everything back on.
C: Yeah, I think so, and I think that makes sense. I guess I would say: the way that we had this set up before would be turning logging off, in the sense that — I believe the issue we had previously was actually just disabling the rule engine. But if we want to fully disable it, then cool, that makes sense to me. Yeah.
B: This is maybe just a little bit of a stronger disable, where you're actually shutting it off entirely. If you wanted to leave it installed and running — just not enforcing anything, logging only — then you would have it enabled and switch this to logging mode. And so that's where we have this broken down into two stories. The first one is just the on/off, which again I think is mostly done — we're probably, like, 90% done with that work, short of just finishing the UX.
B: So those are those two issues. The next two that we broke this down into deal with exceptions at the environment level. So this is a global setting, and then you drill into the operations page — and he actually has a small walkthrough there. Let's see: you now have a Protection button up top, which will open up this Environment Protection drawer, and this section we broke into two as well.
B: So the first issue would be everything above this line, essentially right here, where you can turn it on or off at an environment level. It would have to be turned on globally to be able to turn it on or off at the environment level, but that way you could have it on globally but disabled for just one specific environment.
B: The use case here, or an example, would be, you know, say: I want to test this thing out, but I don't want it on in production yet. You would turn it on globally, and then you would turn it off for your production environment. And then the final issue we have is adding the ability to differentiate between logging and blocking modes on a per-environment basis. So here again you can either revert to the global default, or you could apply, you know, blocking or logging separately, where this would override the global setting that was set earlier.
B: So in my requirements I intentionally did not describe how that change gets implemented. I like what you did with the Helm chart, pushing that out, but, you know, I don't want to get too prescriptive in the solution here. I would expect that when they make the change and hit "Save changes", that new setting is actively pushed out to their environment. So again, I don't want to force a solution on engineering.
C: Forgive me if I'm missing this again, but I feel like I'm not really quite processing the difference between these two settings — enabling the WAF versus, obviously, the response mode mentioned here. I think it would make more sense for it to be, like, a radio button with four values, which would be, like, off, blocking, logging, or global — rather than what happens when you set it to on. I'm just trying to imagine how these two would interact together.
B: You're seeing this page, and you're asking how these two would interact together? This page, no?
C: I would wonder if you need to — if you're flipping it on, then you're implicitly setting it in a given state. So is it implicitly on by default if you're toggling it on here? At which point you have kind of a difference of state between the response setting that appears and the toggle itself. They're kind of operating on the same thing, but I feel like there's, like, an implicit decision that's being made for customers.
B: Okay, so I think the initial state is going to go back to this global setting. So if this global setting is enabled, then by default this would also be enabled; and if the global setting is disabled, then it would be disabled, and you'd not be able to enable it, because it's disabled cluster-wide. So really, on the environment pages, it's going to be pointing back to this global setting for its initial or default state, whether it's on or off.
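The precedence B is describing — global toggle first, then a per-environment opt-out, then a per-environment mode override — can be sketched as a small resolution function. A minimal sketch of the discussion, assuming invented names (`effective_setting`, `env_enabled`, `env_mode`), not the real implementation.

```python
from typing import Optional

def effective_setting(global_enabled: bool,
                      global_mode: str,
                      env_enabled: Optional[bool] = None,
                      env_mode: Optional[str] = None) -> str:
    """Resolve the state for one environment (illustrative only).

    - If the global toggle is off, everything is off cluster-wide;
      an environment cannot re-enable it.
    - An environment may opt out (env_enabled=False) while global stays on.
    - env_mode overrides global_mode; None means "use the global default".
    """
    if not global_enabled:
        return "off"   # disabled cluster-wide
    if env_enabled is False:
        return "off"   # opted out for this environment only
    return env_mode or global_mode
```

The test-in-production use case above would then be `effective_setting(True, "blocking", env_enabled=False)` for the production environment, yielding `"off"` there while other environments inherit `"blocking"`.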
C: I think it does. I need to think about this more, but yeah, maybe.
B: And by default, the global default would be selected. You know, this is an exception basis, so we're not going to pre-select blocking or logging down here for them. You know, the default scenario when you come onto this page, assuming it's enabled in the global settings, would be enabled, and with the global default selected.
B: Okay, yeah, that sounds good. I'll mark all of these ready for grooming and pull them forward from there. So the next one would be the WAF logs and the cluster network policy logs — the Cilium logs. We can more or less discuss these together, because they're going to be very similar. I put a couple notes there in the agenda, as well as in the issues, but after some consideration we decided to pivot here in our proposed solution, rather than displaying the logs in GitLab proper.
B: We are looking at just exporting these logs out to a SIEM or, as Wang noted, a central logging solution. So it's not that we don't want to display the logs in GitLab — we do — but we're also arguing that we need to send them out to a SIEM anyway, and sending them out to a SIEM is likely much easier for us. We don't have any sort of dependency on work from the Monitor team there, and it's just overall a much shorter, easier, smaller chunk of work.
E: Yeah, it doesn't — it doesn't foreclose us from making this available to customers inside GitLab itself, so we're still going to do that. This is for customers who already have a SIEM or central logging and who want it there — and we believe a large percentage of customers will actually want this, and potentially also want to see the logs in GitLab too. When that's more amenable to us implementing with the native GitLab features that Monitor provides, then I think we want to do that as well. So this is not instead of that.
E: More thoughts: we had a good discussion via issue comments this morning. So the easier one is probably the format of the data. Most SIEMs — most logging solutions — will take anything as input, right? That's what they're built for. Most SIEMs as well: they're there to be able to receive data from vendors, but they have no control over the formats they consume, so they're generally very, very flexible on what they're going to take — you know, a delimited format or whatever.
E: They take, like, thousands and thousands of sources. So the comment there about, you know, the format — these various formats — I think this is going to create a little extra work for us that is unnecessary, to support various formats. I think we should support the native format that ModSecurity already logs in initially; most SIEMs and log solutions will be able to handle that already, and in the future we can offer multiple formats if we need to.
E: If we have to do some transformation of the logs, or even configuration of them, that's extra work — to either make it configurable, especially in a UI, or, if we have to put extra code in to transform the data (which would not be a bad thing), we've got to write that. And I think that the standard format for ModSecurity is likely good enough for the MVC.
B: It sounds like there's a high probability that it's going to work, but it'd be nice to just confirm that format, so that we can actually reflect that up in the proposal and say, you know, this is the format that we'll be sending it in — which also happens to be the default. So I'm generally in agreement; we can keep waiting on that one. I just wanted to kind of close the loop on what exactly that would look like.
E: Last bullet, on the next comment — I don't know if I got the right link; I did a bunch of research today. Where it says "ModSecurity log format", a little bit further down — I don't know if that's the right link. It's towards the top of your page, before you scroll — maybe the last link.
C: Oh, there — that is the default. It's this — kind of what I've been calling esoteric — way of breaking up the log data into, like, A-through-Z sections. We don't use this one. What we actually do is use the JSON export, and so it's just newline-delimited JSON objects, which is totally undocumented in ModSecurity, because their docs are just horrible. But regardless, we have that, and that's what we do currently. The way we actually do this for the Elasticsearch integration is as follows.
C: We create a logging sidecar that just tails the entire ModSecurity JSON log and exposes that through Filebeat, and then Elasticsearch just slurps it up. What we could consider here is whether we want to move that logging sidecar into the ingress deployment itself, so that the logging sidecar is always present. Then all we have to do is slurp the logs straight off this pod's logs, and at least it's similar to the way that we're doing it with the Elasticsearch integration for, like, WAF statistics.
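The newline-delimited JSON export C mentions is easy to consume downstream: a shipper (or anything else) reads one complete JSON object per line. A minimal sketch of such a consumer — the `transaction`/`http_code` field layout in the sample is a made-up stand-in, not ModSecurity's real audit-log schema.

```python
import json
from typing import Iterable, Iterator

def parse_ndjson(lines: Iterable[str]) -> Iterator[dict]:
    # Each non-empty line is one complete JSON audit record
    # (newline-delimited JSON, as produced by the JSON export).
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Hypothetical record shape, for illustration only.
sample = '{"transaction": {"id": "abc123", "response": {"http_code": 403}}}\n'
records = list(parse_ndjson(sample.splitlines()))
```

In practice the "reader" would be Filebeat or Fluentd tailing the sidecar's stdout, but the format contract is the same: one JSON object per line.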
C: The reason I want to raise that is because I'm getting concerned that if we provide too flexible a format, we're not providing a good way for customers to eventually transition to supporting something like that.
B: I think so. So I guess what I'm looking for, at the end of the day, is: if you guys could just give me an example, you know, of the format that you would propose here, and put that in the issue — then, you know, I can just double-check that it's going to work for us, which it sounds like it probably will, and then I can update the description to more accurately document what that format looks like. That way, anybody who's pumping it into their SIEM knows what format to expect.
E: So the other thing is — taking a step back just for context for everybody, and to watch the clock — customers are generally going to be, you know, using Auto DevOps in their Amazon instances, and/or Google, and/or Azure, and/or their own instances of Kubernetes, right? So at least at Amazon there's a centralized logging solution called CloudWatch, and there are similar things for Google and Azure — for Google it's called Stackdriver, I believe.
E: In Azure it's called Monitor, or Log Monitor. And sometimes people will send logs to those, so they have a centralized place for logging, so they can see everything happening in their environment, no matter what it is, security or otherwise. And then they'll send a subset, or potentially all, of those logs to a SIEM vendor like Splunk or IBM or LogRhythm — I got those names from the Gartner Magic Quadrant, the top three farthest right. Sometimes they'll send it directly to those SIEM vendors.
E: Some examples for Splunk — I didn't look at IBM or LogRhythm — are to use Fluentd as a DaemonSet, so it runs on every node, and then every container on that node can send to that DaemonSet container that was configured to run on every node. And Fluentd has all sorts of plugins for these various things — like CloudWatch, like Stackdriver, like Azure Monitor — and that might be ("might" being the keyword) a good way to do this.
E: It may not be, because if the SIEM is running inside Kubernetes as well — or the log collector for the SIEM, that is, the thing hanging off the SIEM that they use to receive logs — then the IP address is going to be ephemeral and change, right? But maybe it's not, right — maybe they're doing it in a different way, maybe it's not running in Kubernetes, so it's less likely to change, etc. So I'm not saying Fluentd is the answer.
B: So I mean, I guess my interest in this is just to keep, you know, keep this truly scoped to the minimum, you know, so we can get something out the door and then iterate on it. It seems to me like Fluentd, you know, basically adds the main advantage of being able to send to a Kubernetes pod inside of the cluster, you know, where that IP address is likely to change, and it seems like that's definitely a subset of scope that perhaps we could cut from this initial MVC. But that's my initial take on it.
F: Sorry — my connection dropped for the last ten seconds — but on Fluentd: this is pretty much what Lucas was explaining with Filebeat, and Fluentd is kind of like Filebeat — it's a way to aggregate the logs and stream them out to various sources. So we should stick to Filebeat, since we already have some work done in this area we could reuse.
E: We definitely don't want to reinvent the wheel. I just looked at what, like, Amazon and Google say — the first thing they both mention, if you ask how you get logs to them from Kubernetes, is Fluentd. That's how I came up with Fluentd. That doesn't mean it's the best solution for us; it's just the first one that I found in common between the two. So if we send it to Filebeat, and then Filebeat can send the data to other things — would we then provide the customer with, say, a configuration, or tell them where in the Filebeat configuration to forward the logs to: you know, a Filebeat connection to CloudWatch, a Filebeat connection to, you know, Google Stackdriver, a Filebeat connection to Splunk, or whatever it is? Is that what we do — give them pointers on how to configure Filebeat to send that data to those places?
C: We could do that, but I honestly think all we really have to do is say: here's where the logs live in our default deployment, and if they want to use Filebeat, or they want to use Fluentd, then they can. We would probably want to push them towards Filebeat, because that's what we use elsewhere, and — I'm concerned about the migration path.
C
If
we
make
this
to
custom
and
then
we'll
never
be
able
to
our
customers
back
onto
or
recommend
a
configuration
for
things
like
supporting
logs
or
statistics
within
the
UI,
and
so
that's
why
I
think
that
file
B
would
be
preferable
there.
But
in
the
end,
it's
just
here's
where
the
log
file
is
we
stream
this
out?
So
you
can
just
listen
to
this
stream
or
you
can
just
read
it
yourself
with
like
a
multi
tail.
So.
E: I'm all good with not going down the Fluentd route, because we're already doing stuff with Filebeat, and it's basically an equivalent solution, and it's working well for us so far. So perhaps, instead of making it so we code in configuring Filebeat for this or configuring Filebeat for that, we document how to do it: say, if you want to use Filebeat with CloudWatch, here's how you do it; if you want to use Filebeat with, you know, the equivalent in Google, the equivalent in Azure, with Splunk, with, you know, IBM or whatever — then we tell them, we give them a basic set of instructions, and then point them at the vendor's website on how to do it further. Is that a good way to go?
B: So I know we're pretty much at time here — I don't want to keep things too long — so I guess this is a parting question, I would say, based off of our discussion today: is this still the right UI that we would show in GitLab? It would be a way for them to configure an IP address, a port, the protocol, and the format — or perhaps not a format, with us just standardizing on one. Yeah.