From YouTube: Defend - Kubernetes WAF Enablement MVC demo
Description
Quick demo on progress for enabling Web Application Firewall for Kubernetes
Issue: https://gitlab.com/gitlab-org/gitlab-ce/issues/65192
Cool, okay. Welcome everyone to a quick demo of enabling a web application firewall using NGINX Ingress on your GitLab Kubernetes cluster. Today we'll be demonstrating the first iteration of this, which is currently in an open merge request. It's not super flexible yet, so primarily I just want to show what we've got, get some feedback, and decide whether we want to ship what we have or keep iterating on this a little bit. Before I get started, are there any questions? Cool, all right.
Cool. So we're going to initialize the project. We're using Auto DevOps in this case, so it's going to kick off. Auto DevOps is enabled by default, but no pipeline will initially start; it's kind of an experience baseline here. So this will be fun. I'm also going to speed things along in our pipeline, so I'm going to open up the CI/CD configuration for my project and define some variables to disable some jobs.
This is just part of the demo, because it takes too long to run a pipeline with all these jobs enabled, and I don't need any of these things: container scanning, dependency scanning, license management, performance testing, code quality. By defining these variables here (I could also define them in a pipeline run), none of these jobs will execute when I run the pipeline now. I left SAST enabled because it only takes about 50 seconds with this default app, and you can see at least one of them run.
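For reference, the job-disabling trick described here can be sketched as a `.gitlab-ci.yml` fragment. The `*_DISABLED` variable names below are the Auto DevOps conventions as I recall them for this era of GitLab; treat them as illustrative and check the Auto DevOps docs for your version.

```yaml
# Hypothetical .gitlab-ci.yml fragment: skip most Auto DevOps jobs
# to speed a demo pipeline up, while leaving SAST enabled.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  CONTAINER_SCANNING_DISABLED: "true"
  DEPENDENCY_SCANNING_DISABLED: "true"
  LICENSE_MANAGEMENT_DISABLED: "true"
  PERFORMANCE_DISABLED: "true"
  CODE_QUALITY_DISABLED: "true"
  # SAST_DISABLED is intentionally not set, so the SAST job still runs.
```

The same variables could instead be passed on a single manual pipeline run, which is what "I could also define these in a pipeline run" refers to.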
We'll call it test. We're going to use this project; the zone doesn't really matter, I'm just going to choose the one closest to me so things are ideally faster, and go live.
The GitLab-managed cluster option is an important one here. This means that we'll be able to install applications; if we do not do this, then you need to install things like Helm or Tiller yourself.
Okay, so it's going to start creating this cluster. As you can see here, a cluster provision worker background job kicked off, and I believe this part takes about two and a half minutes. So we'll keep an eye on this page, and I'm going to go ahead and open up Sidekiq just so I can see when this job starts running.
We could also just refresh our cluster page over here until our cluster test shows up, but no big deal there. Okay, so next steps here. This currently is all the standard process for creating a cluster for your project. Once this finishes, we're going to install Helm Tiller so we can install our applications on the cluster. We're going to install a Runner, and the cool thing about Auto DevOps here is that you can actually install your Runner directly on your cluster.
Now, the ModSecurity firewall is currently a supported plugin of the ingress controller here. So internally, what we're doing when we enable this is setting a configuration on this application itself. The current way this works is that we've annotated the Helm chart that runs Ingress here to install it with this additional setting, and what it does is basically copy the ModSecurity module into the NGINX configuration. Now we need our…
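As a rough sketch of the kind of setting being described, in Helm values-file form for the NGINX ingress controller chart (the keys shown are the upstream chart's ConfigMap options; treat the exact shape as illustrative of what the annotated chart sets):

```yaml
# Hypothetical helm values fragment for the NGINX ingress controller chart.
# Enabling ModSecurity loads the module into the generated NGINX config.
controller:
  config:
    enable-modsecurity: "true"            # load the ModSecurity module
    enable-owasp-modsecurity-crs: "true"  # turn on the OWASP Core Rule Set (detection-only by default)
```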
I believe it does, because those two are pretty separate. There are things like JupyterHub here; JupyterHub cannot be installed until you install Ingress first. While these install, I'm also going to install Prometheus. It isn't really necessary for the demo, but it'll give us a pretty chart so we can see that. As soon as Ingress finishes here, we need a base domain for our Auto Deploy to work. So we're going to copy the endpoint generated by Ingress into our base domain, and then we can fire off our CI pipeline job.
So we installed the applications here: Helm at the end there went okay, then it installed the Runner, installed Ingress, installed Prometheus, and there's our Ingress endpoint. So we're going to copy that endpoint, click here, and we're going to use nip.io. For those unfamiliar, nip.io is basically a public service that someone made that just provides wildcard DNS for any IP address, so it can be a convenient way to do HTTP against arbitrary IPs.
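To make the nip.io step concrete: any hostname that embeds an IP resolves to that IP, so the cluster's base domain is just the Ingress endpoint with `.nip.io` appended. The IP below is made up.

```shell
# nip.io embeds the target IP in the hostname itself:
# anything.<ip>.nip.io resolves to <ip>, with no DNS setup required.
INGRESS_IP="34.68.123.45"            # hypothetical Ingress endpoint copied from the UI
BASE_DOMAIN="${INGRESS_IP}.nip.io"   # value pasted into the cluster's base domain field

echo "$BASE_DOMAIN"
echo "production.${BASE_DOMAIN}"     # the kind of host Auto Deploy generates per environment
```
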
Now, the initial implementation of this, at least with the merge request that's currently out, wraps the additional changes to the Ingress Helm chart in a feature flag. So currently we have this behind a feature flag in the merge request, and that means it's currently off by default, but a user could either enable or disable that feature flag to control the deployment of Ingress with ModSecurity enabled. That said, it's a feature flag around the deployment of that chart, not the actual feature.
So if they flip the feature flag after they've deployed, then it's not really going to do anything unless they uninstall the app and reinstall it. Right, and this production job right here: I mentioned before that you need to set the base domain for the cluster. If I had not done that, then this production job here would fail, because there's no base domain set. Build basically just runs docker build on the container.
If there's a Dockerfile present, it auto-builds the app. Again, Auto DevOps is pretty cool because I don't have to do any of that. SAST is going to run for 50 seconds or so, I think test is like a minute long, and then production should be pretty quick. So we'll keep an eye on that. While we're doing that, I'll pop over here and get my kubectl command set up. I have gcloud tools installed, so I'm going to run a quick gcloud test here, which is…
Now, because the app isn't deployed yet, we only see two namespaces of note here. There's gitlab-managed-apps, which is a namespace we use for installing those apps we saw before: Tiller here, the Runner, Prometheus, and Ingress. And kube-system is a Kubernetes namespace that's specifically defined for Kubernetes' internal services, so we don't have to worry about any of those things like DNS, the proxy, and fluentd. Once this production task runs here, we will see a third namespace for our actual application.
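The check being described can be sketched like this, run against a captured namespace listing. The listing is mocked up below, since the real `kubectl get namespaces` call needs a live cluster.

```shell
# Mocked output of: kubectl get namespaces -o name (names only, for brevity)
NS_LIST="default
gitlab-managed-apps
kube-public
kube-system"

# Filter out Kubernetes' own kube-* namespaces; what's left is ours.
# After the production job runs, a third app namespace would appear here.
OURS=$(printf '%s\n' "$NS_LIST" | grep -v '^kube-')
echo "$OURS"
```
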
Okay, cool. So production: no deployments yet, but when this finishes, it will give us the endpoint for our application. Cool, it's finished; there it is. I believe we'll have one, yep, one instance by default here, and if I click this, as you may see at the bottom of the screen, this will open the pod logs. So I'm going to go ahead and open those in another tab, and the button right here opens our live environment. So you can see that our application is deployed.
And here are our pod logs. As you can see, this will go ahead and refresh as I'm making these requests here. The original idea was to put the pod logs here; there's still a ticket for that, but currently it's just tailing the NGINX access log, I think. Yeah, cool, okay. So let's go and run this again, and then we're going to see the new namespace. There we go; I'm running kubectl get pods across all namespaces, and these are the two pods that were created by our deployments.
So I'm going to send a request with a little SQL injection here in the params, and again, passive mode means it will ignore this. But as you can see, we just got some noise out of our log.
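The kind of request being sent looks roughly like the following. The hostname and parameter name are made up, and the block only prints the curl invocation rather than firing it at a live endpoint.

```shell
# A classic SQL injection probe in a query parameter. In passive
# (detection-only) mode, ModSecurity logs this but does not block it.
PAYLOAD="1' OR '1'='1"
URL="http://production.34.68.123.45.nip.io/?search=${PAYLOAD}"

# Print the curl command we'd run against the live environment.
echo curl -G "$URL"
```
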
The rather, well, as I've been saying, esoteric logging format they use here uses these sections, A, B, C, D, E, F, and I'm going to open this up to actually figure out what these sections are. So our request body is in C, H is our audit log trailer, etc. Each one of these sections is broken out for analysis here.
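To make those section letters concrete: ModSecurity's native audit log delimits each part with `--<id>-<letter>--` boundary lines (A = header, B = request headers, C = request body, F = response headers, H = audit log trailer, Z = end of entry). A minimal sketch that pulls one section out of a mocked-up entry:

```shell
# A trimmed, mocked-up ModSecurity audit log entry (real entries are longer).
LOG='--abcd1234-A--
[07/Aug/2019:12:00:00 +0000] local sample entry
--abcd1234-B--
GET /?search=1%27%20OR%20%271%27=%271 HTTP/1.1
--abcd1234-H--
Message: Warning. detected SQLi using libinjection.
--abcd1234-Z--'

# Extract section H (the audit log trailer) using its boundary markers,
# then drop the boundary lines themselves.
SECTION_H=$(printf '%s\n' "$LOG" | sed -n '/-H--$/,/-Z--$/p' | sed '1d;$d')
echo "$SECTION_H"
```
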
A couple of questions, but first, I mean, this is great, Lucas. Thank you for going through the demo. This really helps, you know, make it real: what it's going to look like, what the experience is going to look like. So, would the only way that a user could view those logs be to essentially use gcloud to get credentials and then use kubectl to tail that file?
You could also use the console to log on; there it is, Cloud Shell, that's what it's called. So you start Cloud Shell, and you can do it that way. But yes, that's pretty much it: you have to open a terminal to your ingress controller and tail this file. There are a couple of ways to modify that, but not with our current ingress charts.
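Spelled out, the workflow being discussed looks roughly like this. The cluster name, zone, namespace, pod placeholder, and audit log path are all assumptions for illustration, and the block only echoes the commands, since running them needs a live cluster.

```shell
# Hypothetical cluster coordinates and log path.
CLUSTER="test"
ZONE="us-central1-a"
NS="gitlab-managed-apps"
AUDIT_LOG="/var/log/modsec_audit.log"   # assumed ModSecurity audit log path

# 1. Fetch kubectl credentials for the GKE cluster.
echo "gcloud container clusters get-credentials $CLUSTER --zone $ZONE"
# 2. Open a terminal to the ingress controller pod and tail the audit log.
echo "kubectl -n $NS exec -it <ingress-controller-pod> -- tail -f $AUDIT_LOG"
```
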
There's the OWASP ModSecurity CRS option, which enables the default rule set. There's also, like, a third option, something like modsecurity-snippet, and then you can do whatever you want in there and write some Lua or NGINX syntax, but it's not currently supported upstream by what we're depending on, and I don't know if we want to fully fork anything to get there. So currently, what I have is: I'm wrapping the YAML parsing inside the actual model, sticking it behind a feature flag, and then it adds these specific values.
So again, this could ship in 12.3 if we want; it doesn't go very far. I've been looking at this a while, so I'm very curious about opinions from everyone on the call: is it really a pain to have to tail this log off the controller, or is this standard operating procedure for checking security risks on your Kubernetes instance?
So, I mean, we're definitely not done here with the state the demo is currently in, because yeah, like you said, it's going to be some extra steps to go through to get to that log file. I think before we ever get to a state where we would consider it, you know, viable maturity, that type of thing, we're going to need to do some work to make this easier. That said, for our very first iteration of WAF, which none of our users have ever seen before, I think this is a great first step.
Okay, cool. Any other comments before I… yeah.
ModSecurity rules, okay. And I guess what's being talked about here is that there are basically two options that the current ingress controller chart supports, and that is these two settings.
It's either enable-modsecurity here, or enable-owasp-modsecurity-crs, which just turns the rules on in passive mode. For the idea of adding your own rules, we either need to fork this chart, or there's an open MR for adding a bit more support here, but it's been open since March and it hasn't been merged yet. So I don't know when that's going to happen, but once it gets merged, then we can update our chart and ship it out with whenever the next release is for that.
It's kind of good to get an idea of what the structure of this project looks like. We currently keep a vendor directory of charts within our codebase here, and so we have a Prometheus chart, a JupyterHub chart, the Runner charts, and in a few cases they're customized. So we'll take a look at the Runner charts and what they depend on.
It's defined in the model, but it depends on the upstream chart, so there's actually a Runner chart there, yeah. We actually maintain a chart for this separately, though. The vendor directory has this specific ingress chart, which is the one that I have pulled up there, and this is loaded via the controller in our applications.
So this basically just opens the YAML file, and in my case it does this on the left here; there it is, on the configuration file. And this is just hidden behind the feature flag, because business logic in YAML sucks. So that's our approach, really, architecture-wise: we have our applications defined within our cluster models here, and we can define a separate application that way.