From YouTube: KCD UK | Day #1 Track #2
A
Because it's not possible to do multiple at the same time, or something, but it doesn't matter that my talk's being played. Obviously it's pre-recorded, and I'm really pleased to have pre-recorded it, because if I was trying to do it live right now, I think I would be a melting puddle on the floor.
A
But I'll be here to answer questions.
A
In this case we mean it to say: what is the core specialism of this application team that they should be focused on, instead of them also trying to wire all of the infrastructure elements together? And because this team has reduced headspace to focus on the app, this could have several knock-on implications.
A
And maybe there are other hidden elements that your broader organization is not aware of. There could be other parts of the stack within each team that are somehow hidden in the murky world of shadow IT. Teams could be using tools that don't actually comply with your internal governance or compliance, which could be leading to security risks.
A
You
have
duplication
and
waste.
You
have
team
silos
with
no
collaboration.
You've
potentially
got
shadow,
it
lurking
out
there
and
potentially
security
or
compliance
gaps,
which
could
mean
it's
a
nightmare
to
audit
to
figure
out.
What's
going
on
across
your
business
now,
aside
from
my
team's
combined
many
years
of
working
with
customers,
who've
experienced
some
of
these
issues,
is
there
any
other
data
out?
There
there's
actually
a
great
report
from
dynatrace
that
they
ran
this
year?
It's
based
on
a
global
survey
of
700
cios
in
large
enterprises
with
over
1
000
employees
across
multiple
countries.
A
40% of these CIOs say that limited collaboration is disrupting IT's ability to respond to change, and three-quarters of these CIOs say they are fed up with having to piece together data from multiple tools to assess the impact of IT investments. So you can only imagine what kind of a sprawl of technology they might be dealing with.
A
As with all good questions, there are no easy answers, but I've tried to break this down into two key parts. There's the change that's required at the organizational level, and then, to specifically hone in on the challenge of the platform gap, there's the structure and practices that a platform team needs to have.
A
The parts I'm going to focus on are two of the three interaction modes. The first one is collaboration, defined within Team Topologies as teams working together for a defined period to discover new things. The other one is X-as-a-Service, defined as: one team provides, one team consumes something as a service, for example an API.
A
The second team type that I'm focused on is the platform team, defined in Team Topologies as a team that works on the underlying platform, providing a compelling internal product to accelerate delivery by stream-aligned teams. It's within these two areas that I've focused heavily in my career in the last few years.
A
Back in October 2018, Evan Bottcher wrote this fantastic definition of a platform. He stated: "A digital platform is a foundation of self-service APIs, tools, services, knowledge and support, which are arranged as a compelling internal product. Autonomous delivery teams can make use of the platform to deliver product features at a higher pace, with reduced coordination."
A
Now, to give you some concrete examples, it can look a bit like this. Here's my platform team at the bottom of my slide. The first thing the platform team needs to understand is who their customers are. If we go back to our original organization model, it would look a bit like this: we have teams A, B and C working away on their applications, and those team members, the developers, the designers, the product managers, those teams are the customers for the platform team.
A
With the requirements defined, the platform team needs to start building the platform, that is, building the set of services that the customers need. Now again, the practices that the platform team should follow are the same as the development practices that other product teams should follow. They should be building in small batches.
A
They
should
be
seeking
user
feedback
early
and
often
maybe
through
demos.
They
need
to
keep
checking
that
what
they're
building
is
actually
meeting
the
requirements
of
their
customers
so
as
they
can
minimize
risk
and
not
spend
a
large
amount
of
time
building
something
that
ends
up
not
being
fit
for
purpose
and
not
being
used
by
their
customers.
A
Once
the
platform
has
a
service
ready
to
be
used
by
a
customer,
we
know
that
the
next
step
forward
is
to
move
to
the
x
as
a
service
mode.
Now
this
means
providing
a
method
by
which
the
customer
can
access
the
service
on
demand
without
having
to
have
any
communication
or
intervention
by
the
platform
team.
A
The internal platform should be the easiest path for the teams to get to production, and this could mean several things that the platform team can do. For example, it could mean evangelizing the platform: maybe branding it, giving it a cool name and marketing it internally, such that teams are curious and excited to use it.
A
All
of
these
steps
combined
would
lead
your
internal
platform
to
be
easy
to
use
adopted
by
multiple
teams
with
a
reduced
cognitive
load
and
freed
up
developer
time.
This
platform
itself
would
fit
the
exact
requirements
of
your
business
and,
through
the
ongoing
lightweight
collaboration,
it
would
remain
up-to-date,
secure
and
relevant.
A
I spoke at the SpringOne Platform conference in Austin, and I shared just a few statistics from customers we had worked with who saw huge improvements in their flow of change by implementing some of these practices. For example, one company had 1,500 developers supported by only four operators: a fantastic economy of scale.
A
So what's next? At Syntasso, we're really keen to continue this conversation with other folks in the community, and we would love to collaborate with others. You can find us on our website, or you can ping me on Twitter if you've got questions or stories to share; we would love to hear from you. We're also working on a tool called Kratix, which is a framework for enabling platform teams to build a platform.
A
It
provides
a
contract
between
platform
and
application
teams
to
be
codified.
Now,
it's
very
early.
We
only
released
this
project
last
week
and
it's
in
beta.
So
if
anyone
would
like
to
play
with
it
it's
available
in
github,
we
would
love
to
collaborate
with
you
and
seek
feedback,
and
if
you
have
the
chance
to
provide
there's
any
feedback
on
our
tool
or
on
this
topic
in
general,
then
please
feel
free
to
contact
us
feedback.
Cintasso.Io
or
again.
You
can
message
me
via
twitter.
B
Well, thank you very much for that talk, Paula, that was very informative. I'm just going to go over and look at the Slack channel to see if we have any questions, either there or live on our YouTube channel.
B
I think that is probably the case, but if not, I'm sure you'd be able to answer any questions on Slack after the fact. Absolutely. So I think we have a five minute break before our next talk, which I believe is Denis's. So I will hopefully wait to see if he's going to join the Zoom chat.
D
Hi everyone, welcome to this session about advanced authentication patterns at the edge. I am the Director of Field Engineering in EMEA at Solo.io. So, let's start with a little bit of background. Obviously all of you are aware that applications are now developed with microservices, and I would say that most of you will agree that these microservices tend to now run on Kubernetes clusters. And one of the first questions people are asking, as soon as they start to deploy their application on Kubernetes, is:
D
How do I expose my application to the outside world? The standard answer for that in the Kubernetes world is to use an ingress controller like NGINX or HAProxy, or something like that. And it works: you can expose your service, you can secure it with TLS, you can do some basic routing. But very quickly we have more and more applications deployed in the Kubernetes cluster, and each team reinvents the wheel in terms of managing the authentication, for example.
D
So you have one application that wants to secure the access with OAuth2, another one wants to use JWT tokens, another one wants to use API keys, or even a mix of different options. And they also need some capabilities that you generally find in a traditional API gateway that runs outside of Kubernetes: things like rate limiting, a web application firewall, and so on.
D
So there are different challenges. Each team reinvents the wheel, like I said, and the implementations are in fact even different: one team is using Java, another one is doing Node.js, so they would use different libraries to do the same thing. Instead of that, I think everyone would agree that application teams should focus on the business logic instead of spending time on these authentication mechanisms. Also, the security team doesn't have any visibility on what's configured for each application, so it becomes quite difficult for them to understand if there are any potential security issues there. And, as I said, you still need some other security mechanisms, and you need to implement them outside of the Kubernetes cluster.
D
So
so
what
about
like
using
a
kubernetes
native
api
gateway?
Instead
of
that?
So
the
idea
here
is
that
you,
you
would
perform
the
authentication
at
the
gateway
level
and
you
you
would
have
like
different
options
like
or
like
job
tokens,
api
keys
and
so
on,
and
basically
the
api
gateway
performs
the
authentication
and
then
pass
some
information
to
the
backend
services
to
let
them
know
who
is
the
user
that
has
been
authenticated,
for
example,
that
could
be,
like
I
add,
like
a
header
with
the
user
email,
for
example,
or
other
information.
D
Basically, any information that you get from a claim that is in the token provided by the end user, or in the JWT token returned by the OAuth provider, for example. What's nice as well is that these Kubernetes API gateways can generally do much more than just authentication: you can do rate limiting and WAF, but also things like transformations and so on, and you will see, I will do multiple demos to try to demonstrate that in a nice way.
D
These API gateways can run inside the Kubernetes cluster but still be used to expose applications running in legacy environments, like in VMs, for example. They can also be used to expose modern services running in functions, like in Lambda: they can discover these Lambda functions and expose them to the outside world.
D
I can have all the YAML corresponding to the way I want to deploy my application on Kubernetes in a Git repository and, at the same time, the YAML corresponding to the way I want to secure my application, so the configuration of the gateway, in the same repository, which is very nice, and I will show you some examples very soon. Other mechanisms can also be enforced at the gateway level, and you have nice visibility for the security team: they can go and look at the gateway configuration and understand very quickly how the access to each application is secured.
D
So in this talk I will focus on our Kubernetes-native API gateway, which is called Gloo Edge, and Gloo Edge is based on Envoy. I am sure that most of you are familiar with Envoy, and I will speak about it in a minute, but basically Gloo Edge is a management plane, a control plane, sorry, for Envoy; Envoy is the data plane, and you configure everything.
D
This filter will not perform the authentication by itself, but it will call an external authentication server that will perform the authentication and say yes or no: do I want to accept this call or not? The same way, if the call is accepted, it can go through a rate limiting filter that will call a rate limit server that will determine if the limit is reached. Generally this server, and it's what we do in Gloo Edge, uses something like Redis to persist information about the requests and to be able to determine if the limit is reached.
D
And then there are many things that can be done directly in Envoy, in the filter, without even calling an external component, and that's what we have done: we have created a filter for interacting with Lambda functions, another one for performing transformations, another one for the web application firewall. For JWT authentication we don't use the external authentication server; we perform that directly in Envoy. There is an open source version of Gloo Edge.
D
But
if
you
use
the
open
source
version,
then
you
have
to
build
your
own
external
authentication
server,
your
own
retimiting
server,
and
you
don't
get
all
these
filters.
You
get
some
of
them,
but
you
don't
get
like
the
web
option
firewall
or
jot,
and
basically
everything
that
is
really
related
to
security
is
in
the
enterprise
version
and
that's
what
I
will
use
in
in
the
demo.
D
So why Envoy? First of all, it's coming from a neutral foundation, the CNCF, the same as Kubernetes, so there is no one single company that drives the roadmap of Envoy. It has a very large community. It is used at scale by many companies. It's really designed to be dynamically configured through a control plane; that's also why you see it very commonly used in service meshes. Most of the service mesh technologies are using Envoy.
D
The
most
popular
one
is
to
obviously
is
based
on
it,
which
is
also
a
good
reason
for
adopting
like
a
an
api
gateway
based
on
android,
because
when
you
you
will
adopt
service
mesh
in
your
future,
and
I
think,
like
most
of
you,
will
at
some
point
then
having
the
same
technology
for
the
gateway
and
for
the
the
mesh
allows
you
to
have
like
the
matrix
in
the
same
format,
allows
you
to
to
debug
the
the
issues
the
same
way
and
so
on
right.
D
So I think it's very interesting to invest in a gateway based on Envoy, for all these reasons. So when I say Kubernetes-native, and I say we can drive the configuration through Kubernetes resources, this is what I mean: you will define in the VirtualService custom resource which domain you want to listen to. Like here, I say I have a request starting with /app1.
D
I
want
to
perform
authentication
and
the
authentication
is
defined
in
this
external
object
and
I
want
to
delegate
the
action
to
this
root
table
so
that
a
team
can
be
responsible
for
managing
a
specific
domain.
Then
the
application
team
can
manage
all
the
different
paths
like
they
can
have
different
routes
for
different
micro
services
and
so
on,
and
then
you
can
use
like
an
external
object
to
to
define
the
way
you
want
to
authenticate
the
user.
And
you
see
here.
We
have
a
simple
example
with
over
two.
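A sketch of what such a VirtualService might look like: the names, namespaces and domain are illustrative placeholders, and field names follow the Gloo Edge CRDs as documented, so check them against the docs for your version:

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: demo
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - 'app.example.com'      # the domain this virtual host listens on
    routes:
    - matchers:
      - prefix: /app1        # requests starting with /app1
      # delegate the routing decisions for this prefix to a RouteTable
      # owned by the application team
      delegateAction:
        ref:
          name: app1-routes
          namespace: app1
      options:
        # the authentication itself is defined in an external AuthConfig object
        extauth:
          configRef:
            name: oauth
            namespace: gloo-system
```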
D
We will go through that in the demo, but you can also chain together multiple steps in the configuration, and we will also see that at the end.
D
Okay,
so
in
this
environment
I
have
a
cubans
cluster,
and
if
I
look
at
my
pods,
I
can
see
that
I
have
this
glue
system
namespace,
where
I
have
different
components.
Some
of
them
are
optional
and
I
won't
go
through
the
full
details.
But
basically,
what
you
have
to
remember
is
that
get
the
gateway
proxy
pod
is
envoy.
D
So what I'm going to do here is this: I have deployed Keycloak already on this server, and I will configure it with these few commands. You see it will create a user1 with the password "password" and another user2 with the same password, but they will have two different email addresses. You see the first one has an email address that ends with solo.io, while the second one has an email that ends with example.com. So it looks like that.
D
I
have
like
blue
edge
and
that
will
use
keyclock.
You
know
for
the
authentication,
so
I
can
create
like
a
kubernetes
secret,
that
contain
my
keyclock
secret,
and
then
I
create
this
odd
config
that
I
spoke
about
before
so
in
this
odd
config.
You
can
see
that
I
have
like
the
url
of
my
application,
the
url
of
key
clock.
You
know
just
like
basic
information
about
how
to
contact
and
interact
with
with
key
clock,
and
then
I
have
my
virtual
service.
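An AuthConfig for the Keycloak setup described here could look roughly like this; all URLs, the client ID and the secret name are illustrative placeholders, while the structure follows the Gloo Edge enterprise AuthConfig CRD:

```yaml
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: oauth
  namespace: gloo-system
spec:
  configs:
  - oauth2:
      oidcAuthorizationCode:
        appUrl: https://app.example.com            # URL of the protected application
        callbackPath: /callback                    # where Keycloak redirects after login
        clientId: gloo
        clientSecretRef:                           # Kubernetes secret holding the Keycloak client secret
          name: oauth
          namespace: gloo-system
        issuerUrl: https://keycloak.example.com/auth/realms/master/
        scopes:
        - email
```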
D
You
remember.
I
described
that
just
before
it's
one
of
the
most
important
custom
resource
in
glue
edge
and
you
can
see
here.
I
will
add
this
option
to
it.
So
to
say,
when
I
have
a
request
with
starting
by
slash,
then
I
want
to
send
the
request
to
the
book
info
product
page
and
I
want
to
perform
authentication
using
the
hot
config
that
I
have
like
just
created
before
so
now.
If
I
open
like
chrome,
I
will
see
you
know.
D
I
have
ssl
here
and
I
can
authenticate
with
user
one
in
the
password
password
and
I
have
access
to
my
application.
So
you
see
it's
very
easy.
I
was
able
to
configure
it
it's
a
very
simple
case
right.
I
just
want
to
to
secure
the
access
with
or2,
but
there
will
be
at
the
end.
I
will
show
you
like
something:
a
little
bit
more
advanced
and
even
using
like
authorization
and
so
on.
But
before
that
I
want
to
show
you
also
a
few
other
capabilities
that
we
discussed
before.
D
If I have a header with, I don't know, tenant equal to tenant1, or things like that, I could have a lot of different combinations of rules to get really fine-grained rate limiting.
D
But
here
I
just
show
you
like
a
basic
example:
you
create
the
red
image
config
like
we
created
the
odd
config
before,
and
in
that
case
I
I
reference
the
the
red
limit
config
here,
so
I
know
that
it's
applied
to
this
root
and
you
see
you
can
do
it
here
at
the
root
level
and
that
will
apply
only
to
this
root.
But
I
could
put
this
option
at
the
domain
level
and
then
it
will
apply
to
all
the
roots.
So
we
have
like
a
lot
of
flexibility
here.
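As a rough sketch, a RateLimitConfig allowing 10 requests per minute might look like this; the descriptor key/value pair is an arbitrary placeholder, and the config is then referenced from the route or virtual host options:

```yaml
apiVersion: ratelimit.solo.io/v1alpha1
kind: RateLimitConfig
metadata:
  name: per-minute-limit
  namespace: gloo-system
spec:
  raw:
    descriptors:
    - key: generic_key
      value: counter
      rateLimit:
        requestsPerUnit: 10   # after 10 requests in a minute, Envoy returns 429
        unit: MINUTE
    rateLimits:
    - actions:
      - genericKey:
          descriptorValue: counter
```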
D
So
if
I
just
like
refresh
many
times
you
see
after
10
times,
I
get
this
429
response
codes,
meaning
that
I've
been
rate
limited
right.
So
I
can,
you
know,
delete
this
basic
rate
limit
config,
and
here
I
can
update
it
and
you
know
give
also
like
different
options
depending
if
I
am
like
authenticated
or
not
right,
I
would
say
if
I
am
an
anonymous
user,
I
want
to
have
like
just
five
requests
per
minute,
but
if
I
am
authenticated,
then
I
can
have
like
20
requests
per
minute.
D
So,
as
I
said,
you
can
have
granularity
about.
You
know
different
editors
of
combination
or
you
can
have
like
also
this
option
where
you
you
set
like
different
rules
for
different
users
or
different
rules
for
authenticated,
not
authenticated.
You
see
here.
I
am
not
not
authenticated
and
I
get
like
this
rate
limit
after
just
five
requests,
and
another
thing
we
spoke
about
before
is
like
a
web
application
firewall
right.
So
what
we
did
is
that
we
took
mode
security
which
is
very
popular
and
we
put
it
inside
an
envoy
filter.
D
So
that
means
that
now
I
can
update
my
virtual
service
and
I
can
add
any
mod
security
rule
right.
So
here
I
can
say
I
I
don't
want
like
any
payload.
D
You
know
bigger
than
one
byte
right,
so
it's
just
like
for
demo
purpose.
Obviously,
but
you
see
it's
an
example:
you
can
limit
the
size
of
the
payload,
you
can
white
list,
some
ip
addresses
range,
or
you
know
this
kind
of
things
right.
So
now,
if
I
run
the
curl
command-
and
I
just
like-
send
a
body
request
right
with
some
data-
it
will
be
directly
bigger
than
one
byte
right
and
you
see
I
get
this
error
right.
I
get
it's
refused
and
I
get
an
error
message
telling
me.
D
D
...why it has been rejected by my web application firewall. There could be other options: you could block certain user agents, for example. So if I have a user-agent header with the value "scammer", then I want to reject the request. So again, I just send a curl request with this user agent, and I am blocked by the web application firewall. You have a lot of different options; I'm just giving you a very quick overview.
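Rules like the one demonstrated here are plain ModSecurity directives embedded in the route (or virtual host) options; a sketch along these lines, where the rule ID, message and intervention text are made up for illustration:

```yaml
options:
  waf:
    customInterventionMessage: 'Request rejected by the web application firewall'
    ruleSets:
    - ruleStr: |
        # turn the ModSecurity rule engine on
        SecRuleEngine On
        # reject requests whose User-Agent header contains "scammer"
        SecRule REQUEST_HEADERS:User-Agent "scammer" "deny,status:403,id:107,phase:1,msg:'blocked scammer'"
```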
D
Another thing that is really nice is the way we can interact with the request. For example, here we said we want to specify a rate limit, and we just got this 429 error, which was not really nice: you just get this error. We could say: when there is a 429 response code, then I want to change the body, so that now, instead of just this 429 error, I get a 200 response, but with a body that displays it in an HTML format that is a little bit nicer.
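In Gloo Edge this kind of response rewrite can be expressed as a transformation template using Inja expressions on the response status; a hedged sketch (the exact matcher and template field names should be checked against the transformation docs for your version, and the HTML body is a placeholder):

```yaml
options:
  transformations:
    responseTransformation:
      transformationTemplate:
        parseBodyBehavior: DontParse
        headers:
          # rewrite a 429 status into a 200, leave everything else untouched
          ':status':
            text: '{% if header(":status") == "429" %}200{% else %}{{ header(":status") }}{% endif %}'
        body:
          # replace the body of rate-limited responses with a friendlier HTML page
          text: '{% if header(":status") == "429" %}<html><body>Sorry, you have reached the rate limit. Please try again in a minute.</body></html>{% else %}{{ body() }}{% endif %}'
```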
D
Oh, in fact, I put it in this user application.
D
So
here
you
see,
I
got
this
like
modified
response,
so
that's
just
an
example
right
you
can.
You
can
do
whatever
you
want
in
term
of
transforming
the
request,
header
request,
a
response,
header
response
body
and,
and
things
like
that
right.
We
can
also
like
define
where
we
want
to
do
this
transformation.
So
now,
sometimes
it's
nice
to
do
transformation
before
we
do
authentication
or
after
we
do
authentication
or
things
like
that
right.
So
here
we
have
what
we
call
an
early
transformation.
So
that
applies
before
the
authentication
right.
D
I
will
transform
the
the
response
and
I
will
add
this
json
content
type,
and
I
will
you
know,
change
the
the
body
as
well
right,
so
I
can
just
go
there
and
if
I
you
know
like,
if
the
status
is
401
right,
I
can
simulate
that
because
I
use
the
http
bin.
So
I
want
to
get
a
421,
so
I
just
have
to
call
this
right
and
you
see
the
the
transformation
here
right
that
is
happening.
D
While,
if
I
run
like
a
normal
request
with
like
the
code
200
right,
it
will
not
like
perform
any
transformation
right.
D
We
we
have
like
a
another
example
where
we
can
take
the
data
from
another
header
using
like
a
regular
expression
and
create
a
new
editor
based
on
the
value
we
got
from
the
regular
expression
right.
So
you
can
see
here
we
can
have
in
the
request,
like
a
editor
called
x,
my
initial
header-
and
it
has
this
format
bearer
and
something
right,
and
there
is
the
regular
expression
here
and
we
want
to
create
a
new
editor
with
just
the
value
here.
D
We want to remove "Bearer", basically, and that could be very nice when you want to maintain compatibility. So I just do that, and here I again send a request, and you see my initial header has this "Bearer" prefix, and I now create a new header that has just the value. So again, it's quite useful.
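A sketch of such a header-to-header transformation using an extractor with a regular expression; the header names match the ones used in the demo, while the extractor name and exact field names should be checked against the Gloo Edge transformation docs:

```yaml
options:
  transformations:
    requestTransformation:
      transformationTemplate:
        extractors:
          # capture everything after "Bearer " from the original header
          token:
            header: 'x-my-initial-header'
            regex: 'Bearer (.*)'
            subgroup: 1
        headers:
          # create a new header carrying just the captured value
          x-my-new-header:
            text: '{{ token }}'
```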
D
We can also use transformations to extract some information from the token that we get after the authentication.
D
So
you
see
here
we
say
we
want
to
use
the
jot
filter
so
that
we
take
the
email
claim
that
is
returned
by
in
the
jot
token
after
we
authenticate
with
key
clock,
and
I
want
to
create
a
new
header
called
x,
solo
claim
email
for
that
right.
So
we
do
that
for
just
updating
again
the
service,
and
if
I
refresh
my
my
token
here,
like
my
page
sorry
so
you
can
see
here
now,
I
have
this
x
solo
claim
email
that
is
received
by
the
back-end
application.
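Copying a JWT claim into a header is done through the JWT filter configuration; roughly like this, where the issuer and JWKS URLs are placeholders for the Keycloak realm used in the demo:

```yaml
options:
  jwt:
    providers:
      keycloak:
        issuer: https://keycloak.example.com/auth/realms/master
        jwks:
          remote:
            url: https://keycloak.example.com/auth/realms/master/protocol/openid-connect/certs
            upstreamRef:
              name: keycloak
              namespace: gloo-system
        # copy the "email" claim of the validated JWT into a request header
        claimsToHeaders:
        - claim: email
          header: x-solo-claim-email
```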
D
So
you
remember,
we
do
the
authentication
at
the
gateway
level,
but
now
it's
important
also
for
the
back-end
application
to
know
who
is
the
user
that
has
been
authenticated
right
so
now.
It
knows
it
right,
so
it
doesn't
have
to
be
from
the
authentication,
but
you
still
need
to
know
in
many
cases
who
has
been
authenticated
right.
So
that's
a
very
typical
use
case
and
then
we
can
also,
as
I
said,
chained
together
several
steps
in
the
hot
config
right.
D
So
we
did
like
create
a
hot
config
where
we
want
to
do
authentication
with
key
clock
here,
but
we
can
also
have
a
second
step
which
is
performing
auto
authorization
with
opa.
So
we
just
like
update
the
hot
config
and
we
don't
need
an
opa
server.
This
is
what's
really
nice.
We
use
basically
the
opa
library
directly
in
our
external
authentication
service
so
that
you
just
need
to
provide
the
rego
policy
in
a
config
map.
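For example, a Rego policy allowing only users whose email claim ends in @solo.io could be shipped in a ConfigMap and chained after the OAuth2 step in the AuthConfig; the policy, names and query below are illustrative, not the exact demo manifests:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: allow-solo-email-users
  namespace: gloo-system
data:
  check-email.rego: |
    package test
    default allow = false
    allow {
      # decode the JWT passed along by the OAuth2 step and inspect the email claim
      [_, payload, _] = io.jwt.decode(input.state.jwt)
      endswith(payload["email"], "@solo.io")
    }
---
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: oauth-opa
  namespace: gloo-system
spec:
  configs:
  - oauth2:
      oidcAuthorizationCode:
        # ...same Keycloak settings as before...
  - opaAuth:
      modules:
      - name: allow-solo-email-users
        namespace: gloo-system
      query: 'data.test.allow == true'
```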
D
So if I just do that, and then we take again the VirtualService we had before, still referencing this AuthConfig object, then you can see that here I can still access the application, because I am authenticated as user1. But let's say I go here and I try to access the same page...
D
I wanted to show you the fact that you can drive everything through YAML, that you can take advantage of authentication at the edge, perform authorization as well, and also take advantage of a web application firewall, rate limiting and all these different things. You can go to the docs and you will see many guides that show you how to handle Lambda functions, for example, or gRPC. In terms of security, you have a lot of other options: authentication with API keys, LDAP, creating your own plugins and so on.
B
I just want to start by apologizing for the video quality during this stream. Denis has actually posted his video in the Slack channel, so if you want to have a look at any of the command line or terminal work in a higher resolution in the short term, you can do that. Also, when we post this recording afterwards on YouTube, it will be in a higher resolution as well. So apologies for the technical issues that we had with the stream.
B
So I just want to say thanks for joining, Denis. I just want to have a look and see if there are any questions about your talk that weren't related to the video quality.
D
I'm always on Slack, and you can find me in the CNCF Slack or in the Solo Slack, so don't hesitate to come there. As you said, I gave the link for the recording, so perhaps it will be easier to ask questions after you watch it, seeing all the command lines and everything. So don't hesitate to take the time to watch it, then come and ping me on either the CNCF Slack or the Solo Slack, and I will be happy to answer any questions there.
B
Great, well, thank you very much for joining and for doing a talk.
E
Thank you for inviting me for the talk, and hopefully you will have a lot of other great sessions moving forward, I'm sure.
B
Ten to one fifty. All right, well, until then, thank you very much.
G
All right, let's do it. Welcome back, everybody, to the afternoon sessions for track two. My name is Josh, I'm here with my co-host Chris, and as always, before we get started, if you're not already, please make sure that you hop on to Slack to ask any questions and to network and chat with the speakers today. That's slack.cncf.io, and we're in the kcd-uk channel.
G
We
have
workshops
later
this
week,
workshops.kcduk.io
and
with
that
I
will
hand
over
to
our
next
speaker.
R
Arsh Sharma joins us from VMware, and he's going to be talking to us today about a guide to evaluating dependency updates to Kubernetes. And on that, I will hand over to Arsh.
F
Hey everyone, I hope you are having a good conference. I'm super excited to be here. Can someone please confirm if the audio is all right? All right, let's get started. So hello, everyone, and welcome to Kubernetes Community Days UK. I am Arsh, I work at VMware, and I'm also on the current Kubernetes release team. This session is going to be about how we evaluate dependency updates in the upstream Kubernetes project. So let's get started. Before we begin, I want to give a brief overview of what I'll be covering in the talk.
F
This should help if you, too, are looking to contribute to the upstream Kubernetes project. So first things first: what are dependencies? Dependencies, put simply, are external packages which your code uses. These external packages are distributed as modules. As per the definition of a module in Go, it is nothing but a directory containing a collection of nested and related Go packages, with a go.mod file at its root. If you aren't familiar with what a go.mod file is, don't worry, I'll be covering that in the next few slides.
F
By default, if you create a main.go file and start writing code in it, you won't get the support of Go's dependency management tools. You will need to put your code in its own module in order to track and manage the dependencies you add. You can do this by running the go mod init command and specifying the name of your module.
F
Once
you
put
your
code
in
its
own
module,
you'll
see
that
a
go.mod
file
appears
in
your
project
directory.
A
go.mod
file
simply
describes
the
module's
properties,
including
its
dependencies
on
other
modules
and
on
versions
of
code.
If
you
see
to
the
left,
you'll
see
that
this
is
an
example
of
a
very
simple
go.mod
file.
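For reference, a minimal go.mod along the lines of the one on the slide might read as follows; the module path and versions here are illustrative, not the slide's exact contents:

```
// go.mod: declares the module path, the Go version, and the required dependencies
module github.com/example/hello

go 1.16

require github.com/julienschmidt/httprouter v1.3.0
```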
F
Please
note
here
that
the
go.some
file
is
auto-generated
based
on
your
go.mod
file,
and
you
should
never
need
to
edit
it
manually
to
keep
your
manage
dependency
set
tidy.
You
can
use
the
go
mod
ids
command
using
the
set
of
packages
you
have
imported
in
your
code,
for
example
julian
http
router,
here
in
this
image,
the
good
or
mod
the
gomod
id
command
edits
your
go.mod
file
to
add
the
modules
that
are
necessary
but
missing.
F
It
will
also
remove
the
unused
modules
that
do
not
provide
any
relevant
packages,
so
say
in
this
example
of
code
on
mod
file.
If
you
stopped
using
example.com
this
module,
then
when
you
run
gold
mod
id,
it
will
remove
that
line.
Lastly,
it
will
also
regenerate
the
go.some
file
based
on
the
updated
code.mod
file.
F
So I think when it comes to dependencies, it's safe to say: the fewer, the better. Now this does not mean that you should try implementing the functionality of each package on your own. The reason I say that fewer dependencies are better is because it'll mean that you have to keep track of fewer releases of dependencies for your project, and you'll have a much easier time updating those dependencies.
F
Well, what does directly using them mean? If we go back to the previous code, we'll see that we import julienschmidt/httprouter and then, in the first line under the main function, we use it. Now, if this module internally uses something else to do what it does, we do not care about that, even though that is technically essential for our code to function properly.
F
The next thing in the output is the total number of dependencies of our project, which is pretty self-explanatory, apart from one slight caveat: even though here in this example you see that the sum of direct dependencies and transitive dependencies is equal to the total number of dependencies, it does not necessarily have to be so. Now why is that? Simply because a dependency can be both a direct as well as a transitive dependency.
F
This
is
best
explained
with
the
following
example.
Let's
say
our
module
depends
on
this
module
called
woof,
which
internally
uses
this
other
module
called
meow,
but
our
module
also
directly
uses
meow
so
like
in
the
previous
code,
we
were
importing
julian
schmidt,
http
router.
We
are
importing
meow
and
wolf.
The
only
thing
is
that
woof
also
internally
uses
meow.
F
So,
in
this
case,
meow
would
be
both
a
direct
as
well
as
a
transitive
dependency
of
our
project.
So
if
you
see
in
this
example,
the
number
of
direct
dependencies
is
two
wolf
and
male.
The
number
of
transitive
dependencies
is
one
just
now,
but
the
total
number
of
dependencies
is
not
two
plus
one.
Rather,
it
is
just
two
which
are
wolf
and
male.
F
The final thing in the output is the max depth of dependencies, which is nothing but the length of the longest dependency chain. Going back to the previous slide, we see that there are two dependency chains here: one is from our module to woof to meow, which has a length of three, and the other is from our module to meow, which has a length of two. So in this case the max depth of dependencies would have been three.
F
The graph subcommand also comes with a useful flag which lets you specify a particular dependency whose chains you want to be highlighted. So let's say you only want to see the chains which have the text github.com in them; then you can run the command shown, and you would see an output similar to what you see right now.
F
The
third
sub
command
that
f
stack
provides
us
with.
Is
the
cycles
sub
command?
What
this
does
is
show
all
the
cycles
present
in
the
dependencies
of
a
project,
an
example
of
cycles
in
project
dependencies
is,
if
a
depends
on
b,
which
depends
on
c,
which
again
depends
on
a
so
here.
For
this
simple
project
I've
been
using
till
now,
you
see
that
the
cycles
in
the
dependencies
are
due
to
x,
net,
depending
on
x,
crypto
and
vice
versa.
F
What is Prow, you might ask? Prow is a Kubernetes-based CI/CD system. Prow jobs can be triggered by various types of events and report their statuses to many different services. To put it in very simple terms, and in the context of this talk, Prow is basically responsible for running certain tests on the PRs that are made. It can also run these tests on the master branch of the kubernetes repository. For depstat, we have two Prow jobs.
F
So,
if
your
pr
changes
dependencies
brow
would
catch
that
and
run
the
check,
dependency
stats
job,
which
would
give
an
output
similar
to
the
one
you
see
on
right
here.
For
this
particular
pr,
the
number
of
direct
dependencies
was
being
changed
by
one,
which
is
what
debstar,
captured
and
reported.
F
So
now,
finally,
to
conclude,
I
want
to
touch
a
bit
upon
how
I
got
the
opportunity
to
work
on
this
project
as
a
student
and
how
you
too
can
get
started
with
the
help
of
various
mentorship
opportunities.
The
kubernetes
community
provides
I
applied
to
this
program
called
the
linux
foundation
mentorship.
F
This is why I would highly encourage everyone, regardless of their existing knowledge, to apply to these opportunities if you're looking to get started contributing to open source projects like Kubernetes. My only request would be: please do not self-reject when applying to these mentorships, thinking that you do not know enough. As someone perfectly put it, we are all learning all the time, so please do not hesitate to ask questions or to apply.
F
There is also the Google Summer of Code program, which runs once a year, and this one is only for students. Personally, I have been part of all three of these, and I can easily say that they have helped me learn and grow a lot. You can read about more such opportunities by visiting the link on this slide.
F
So
this
was
it
from
my
side
and
thank
you
so
much
for
attending.
If
you
have
any
questions
like,
please
feel
free
to
put
them
on
slack
or
reach
out
to
me
on
twitter
or
drop
a
mail.
You
can
find
the
links
to
these
slides
at
sharma.com
talks
and
once
again,
thank
you
so
much
for
attending
and
I
hope
you
learned
something
new
out
of
this
session.
G
Awesome, thanks very much, Arsh — that was fantastic. I'm not sure we have any questions in the Slack at the moment, but one question from me: obviously there's a bit of a complication with Go when it comes to dependencies, because you have this distinction between packages and modules. Does depstat deal at all at the package level, or is it just at the module level?
G
Brilliant. Chris, anything from your side?
H
Oh,
that
was
awesome.
It
was
great
to
get
some
background
and
it's,
I
think,
it's
always
good
to
encourage
folks
to
get
involved
in
the
community.
I
think
we
talk
about
open
source
about
being
a
big
community,
but
sometimes
it
feels
like
there's
quite
a
step
up.
You
know,
I
think
the
the
go-to
for
most
people
is
to
contribute
to
this.
I
need
to
be
able
to
write
low
level
go
code
or
some
to
be
able
to
contribute
thousand
lines
of
code.
H
The
simplest
way
to
contribute
is
just
to
get
involved,
and
maybe
just
do
some
few
typos
and
you
know,
go
obviously
the
obvious
one
is
the
doctor
pages,
but
also
even
just
contributing
to
the
actual
source
stuff.
As
I
said,
you
know
coming
from
a
non-go
background
and
very
quickly
being
minted
and
so
on.
H
I
think
it's
a
so
one
thing
I
love
about
this
community
and
not
just
we
talked
we
felt
fondly
about
the
the
actual
community
to
meet
outside
of
the
communities
earlier,
but
I
think
the
actual
social
side
of
things
with
the
whole
community
is
really
strong.
It's
it's
so
welcoming
anyone
from
any
any
background
can
come
into
there
and
to
and
again
to
echo
ash's
comments.
Don't
let
your
imposter
syndrome
get
the
better
of
you.
H
You
know
everyone
is
welcome
and
once
you
get
in
there,
you
realize
that
none
of
us
know
what
we're
doing
so.
We
all
started
from
from
somewhere
with
a
complete
blank
slate
and
there's
a
good
chance
that
you've
got
something
to
bring
to
the
table
that
we
don't
have
already
and
the
community
doesn't
have
already
so.
G
But yeah, awesome. We're a little bit ahead of schedule, so we've got a little bit of time.
G
Yeah, open source can seem really intimidating, but one thing I've learned, especially from the Kubernetes community, is that it's surprisingly welcoming to new people who are looking to contribute. It's not always going to be the fastest experience — there are something like 2000 or more open PRs on the kubernetes repository right now — but when someone does eventually get around to looking at what you've contributed, they're always there and willing to help.
G
In that case, Chris, you were going to... yeah.
H
Just
I
I
know
we
got
about
five
minutes
before
the
next
talk,
so
I
don't
know
if
it's
worth
just
quickly
touching
on
some
of
our
sponsors.
You
know
we
we
added
some
hiccups
this
morning
to
be
lost
a
little
bit
of
time.
So
I
guess
I
mean
I'm
going
to
talk
about
tram
shed
because
I
think
they're
doing
a
fantastic
job
of
hosting
us.
Tramshed
tech
is
home
of
home
to
startups
from
the
digital,
creative
and
tech
communities.
H
They
are
based
in
cardiff,
so
very,
very
happy
to
have
you
know
some.
Some
uk
based
companies
involved
in
this
they're
also
opening
sites
in
newport
and
swansea
very
soon
as
well.
No
matter
where
you
are
in
your
entrepreneur,
entrepreneurial
journey
tram
shed
tech
can
help
support.
You
grow
your
business
by
offering
great
spaces
access
to
skills
and
training,
help
with
business
growth
and
raising
capital.
They've
got
a
12
week,
startup
academy,
supported
by
google,
and
it's
currently
open
to
applications.
H
You
can
apply
at
academy.tramshedtech.com
I'll
reach
out
to
them
through
the
usual
social
channels,
more
information
so
again
massive.
Thank
you
for
trying
tram,
shed
tech
for
helping
host
and
present
this.
It's
been
absolutely
awesome.
They
are
a
great
bunch
and
they've
also
got
a
fantastic,
a
fantastic
location.
I
think
it's
where
they,
the
cardiff
watch
party,
is
at
the
moment
and
we'll
be
running
the
the
pop
quiz
from
later
so
really
great
bunch
for
a
branch
of
process.
Folks.
G
And
on
that
note
I'll
take
a
moment
to
call
out
another
one
of
our
sponsors
who
actually
two
of
the
people
on
this
call
right
now,
chris
and
our
next
speaker,
tom,
are
members
of
this.
This
organization,
so
just
want
to
shout
out
real
quick
systig.
The
sysdig,
secure
devops
platform
provides
security
security.
G
Sorry,
the
australian
came
out
in
me
there
to
confidently
run
containers,
kubernetes
and
cloud
secure
the
build,
detect
and
respond
to
threats
and
continuously
validate
cloud
configurations
and
compliance,
maximize
performance
and
availability
by
monitoring
and
troubleshooting
infrastructure
and
services.
Statistic
is
a
software
as
a
service
platform
built
on
an
open
source
stack.
H
Great
stuff,
thanks
for
that
josh
thanks
for
the
plug
there.
So
so
yes,
so
it
gives
me
great
honor
to
you
know.
I
got
to
talk
to
one
of
my
friends
earlier
today
when
when
katie
chong
was
talking
and-
and
I
get
the
opportunity
again
to
introduce
a
good
friend
of
mine
tom
who
works
in
our
in
that
sorry,
the
systig
uk
team
and
today
he's
going
to
be
talking
about
collaborative
security
with
falco
so
tom
over
to
you.
I
Okay
straight
into
it
yeah.
Let
me
just
share
my
screen
in
that
case.
Let's
hit
that
and
present
so
are
we
sharing
successfully.
I
Excellent,
so
yes,
thanks
for
the
intro
chris
as
yeah
as
chris
mentioned
yeah,
so
I
work
alongside
chris
insisting
and
the
focus
today
is
going
to
be
on
the
falco
project,
which
is
one
of
the
open
source
projects
that
was
originated
in
cystic,
but
we
donated
it
to
cncf,
and
it's
now
very
much
a
cncf
run
project
that
we
still
contribute
heavily
to
and
falco
is
very
much
focused
on
runtime
security
in
the
cloud
native
space,
so
we'll
delve
into.
I
Essentially,
this
is
an
introduction
to
falco
what
it
is,
what
it
does,
how
you
can
use
it,
how
you
can
get
involved
with
it.
I
will
speak
a
little
bit
at
the
end,
just
a
bit
of
a
plug
since
we're
sponsoring
the
event.
I
Just
to
tell
you
a
bit
more
about
what
systig
does
as
well
in
using
falco,
but
the
bulk
of
the
talk
will
be
on
the
open
source
aspect
of
it,
starting
you
know,
taking
a
step
back
and
thinking
about
approaches
to
securing
systems,
though
so
fundamentally,
there
are
two
things
you
can
do.
This
there's
the
prevention
side
of
things
where
you're
actually
trying
to
modify
the
behavior
of
a
system
by
you
know
stopping
things
from
happening,
so
you
know,
could
be
killing
processes
it
could
be
just
blocking
system
calls.
I
You
know
the
actual
doing
things
on
the
system.
The
other
aspect
is
detection,
where
you're
looking
at
what's
happening,
live
on
a
system
and
then
looking
to
detect
things
that
look
suspicious.
So
you
know
it
could
be.
A
process
suddenly
starts
talking
to
a
new
network
or
accessing
files
and
modifying
binaries
with
you
know
those
sorts
of
strange
behaviors
and
picking
up
on
those
two
and
they're
not
they're,
two
different
approaches
and
you
need
both
in
assistance
and
you'll
need
prevention,
style
systems,
you
know
and
guard
rails
and
so
on.
I
You
also
need
detection
capabilities
within
your
systems
as
well,
because
you
know
the
prevention
is
only
as
good
as
the
rules
you've
thought
of
detecting
when
there
is
suspicious
behavior
is
a
much
harder
thing
to
do,
but
it's
also
one
of
the
key
things
that
you
need
to
do
to
find
where
you
have
zero
day,
exploits
or
possible
malicious
actors
within
an
environment
often
the
only
way
you'll
detect.
Those
is
by
looking
for
suspicious
behavior.
I
So,
just
looking
at
a
few
examples
of
tooling
within
the
linux
world
that
deals
with
this,
these
kind
of
security
enforcement
capabilities,
you
have
things
like
sec,
comp
and
sec
comp
bpf,
which
are
involved
in
you
know.
Filtering
system
calls
so
you're,
actually
blocking
system
called
se
linux,
which
is
very
much
a
you
know,
a
linux
security
suite
and
you
know
it's
vastly
capable
in
terms
of
doing
mandatory
access,
control
and
label
based
security
profiles
and
so
on.
I
They
shouldn't
and
you
need
all
of
this
stuff,
but
it's
quite
difficult
to
get
it
correct
and,
as
we
add
more
layers
of
complexity,
as
we
add
kubernetes
and
you
know,
container
runtimes
and
so
on
and
dynamic
workloads
getting
that
balance
right
of
blocking
stuff
and
securing
things,
whilst
also
enabling
those
workloads
to
do
what
they're
supposed
to
do
and
not
slowing
down
your
developers,
not
slowing
down
you
know
not
getting
in
the
way
is
a
difficult
thing
to
do
and
then
looking
at
the
on
the
detection
side
of
things,
we
have
things
like
the
kernel
kernel,
audit
d,
and
you
know,
if
you
look
at
what
sc
linux
does
he
actually
sends
audit
events
at
the
point
where
it
detects
denials
but
they're
just
that
sort
of
auditing?
I
What's
happening,
live
on
your
system
just
getting
that
audit
log
out.
Kubernetes
also
has
an
audit
logging
system
and
falco,
which
is
what
we're
going
to
talk
about
predominantly
today
fits
into
that
detection
side
of
the
security,
so
it
falco
itself
doesn't
do
anything
in
terms
of
prevention,
it's
very
much
taking
a
stream
of
data
and
looking
for
anomalous
activity
within
that
stream.
I
As
you
can
see
from
there,
you
know
the
list
of
tools
that
do
this
kind
of
detection.
There's
still
there's
a
lot
to
be
done
in
this
space,
and
you
know
we're
working
in
inside
systing
on
a
lot
of
things
like
you
know,
machine
learning,
capabilities
to
be
able
to
piece
together
the
anatomy
of
an
attack
across
multiple
steps.
So
you
know
you
if
you
see
this
behavior
followed
by
this
behavior,
and
then
you
know
this
could
be
a
an
escalation
followed
by
a
lateral
movement.
I
Those
sorts
of
attack
vectors
we're
trying
to
piece
that
together
and
from
a
detection
point
of
view,
becomes
extremely
important
as
and
yeah
as
you
see
in
the
bottom
of
the
slide
there,
you
can't
rely
just
on
prevention,
so
assuming
that
you've
put
all
the
prevention
in
place
and
that
you're
done
is
never
enough.
You
need
the
detection
rules
in
there
as
well
and
yeah.
Just
as
an
analogy
to
that,
you
know
I've
locked
all
my
doors,
but
what?
I
If
somebody
you
know
if
I
leave
one
open
or
if
somebody
manages
to
break
a
window,
then
if
I've
got
a
dog
in
my
house,
then
that
dog
can
alert
me
to
what's
going
on.
So
that's
you
know
it's
it's
the
two
that
you
need
both
are
useful
you're
not
going
to
rely
on
your
dog
to
secure
your
house,
but
you
can
you
know
you
can
use
one
to
detect.
What's
going
on,
you
get
the
idea.
I
So
moving
on
to
falco
and
what
it
does
so.
The
first
aspect
that
falco
deals
with
is
the
system
called
stream
coming
from
the
linux
kernel
and
the
reason
that
we,
this
was
chosen
as
an
information
source
is
because
it's
fundamental
to
everything:
that's
happening
on
a
linux
system.
So
if
you
look
at
trying
to
secure
a
container
workload
on
us
running
in
kubernetes
on
a
host,
there's
a
whole
lot
of
layers
there,
but
fundamentally
you've
got
a
process.
I
That's
running
on
a
linux
kernel
and
in
order
to
interact
with
the
outside
world,
it
has
to
make
system
calls
and
so
that
system
called
stream
is
going
to
tell
us
an
awful
lot
about.
What's
going
on
in
that
host
and
what's
that,
what
that
process
is
up
to,
and
so
then,
by
starting
from
that
kernel
system
call
we
can
see.
You
know
what
processes
are
running.
I
What
I
owe
them
is
going
on
there,
what
network,
they're,
accessing
and
so
on,
and
you
know,
there's
a
complete
picture
that
you
can
build
up
just
from
that
system
called
stream.
So
it's
an
extremely
rich
source-
and
this
was
one
of
the
fundamental
things
that
systig
started
out
doing
was
tapping
into
that
system
called
stream
and
making
sense
of
it
and
tagging
that
data
with
all
the
the
kind
of
the
layers
or
as
you
build
up.
So
you
know
this
is
a
process
it's
running
in
a
container.
I
That
container
is
running
as
part
of
in
this
pod,
which
is
part
of
this
deployment,
which
is
in
this
name
space.
All
of
that
sort
of
complete
layering
up,
but
fundamentally,
if
you've
got
that
system
called
that's
where
we
start
from,
and
so
one
of
the
things
that's
just
that
falco
is
able
to
do
is
instrument
at
that
kernel
level,
and
it
does
this
in
two
ways,
so
you
can
either
insert
a
kernel
module
and
there
are,
and
we've
open
sourced
this
kernel
module
as
well.
I
So
you
can
then
insert
that
into
your
kernel
that
enables
a
ring
buffer
that
exposes
that
system
called
stream
into
user
space,
which
is
then
fed
into
falco
or,
conversely,
for
more
modern
kernels,
where
we
have
eppf,
which
is
where
we'll
be
moving
in
future,
is
using
an
ebpf
probe,
which
then
gives
us
that
standard
interface.
So,
rather
than
having
to
insert
modules
into
your
kernel,
you
can
just
pull
that
same
data
stream.
I
Out
of
that
out
of
the
kernel,
either
way
we're
getting
the
full
system
called
stream
and
then
feeding
it
into
falcon,
and
then
you've
got
a
couple
of
libraries
in
there.
So
the
best
cap
and
lib
esp
essence,
which
are
the
the
analysis,
libraries
that
then
will
categorize
and
you
know
and
structure
that
data
as
it
comes
out
and
then
we'll
have
a
rule
set
that
you
can
give
to
falco,
which
you
can
then
use
to
spot
particular
behaviors.
I
So
it's
pattern
matching
essentially
against
that
data
stream,
within
that
the
second
source
that
we've
we're
able
to
ingest
with
cystic
er.
Sorry
with
falcon,
is
the
kubernetes
audio
logs.
So
this
is
a
standard
api.
That's
coming
a
standard
feature
of
the
kubernetes
control
plane.
Now,
where
all
security
relevance
data
is
being
dumped
out
of
this
stream
in
chronological
order,
and
so
we
can
see
all
of
the
activity
generated
by
users,
applications
etc
for
all
api
interactions
that
are
security.
I
That goes through kube-apiserver as the front end, and all manipulations going into it come out of that audit log. Again, this is now a very rich data stream: all deployments that are happening, any edits to objects, config maps, creation of secrets, anything that's being deleted.
I
All
of
those
sort
of
security,
relevant
data
is
all
included
in
that
audit
log
and
it's
being
and
can
be
fed
out
to
sys
to
falco
so
that
it
can
ingest
it
and
then.
Finally,
this
is
a
fairly
new
capability
that
we've
added
recently
where
falco
is
now
starting
to
be
able
to
ingest
cloud
logs.
So
aws
cloud
trail
was
the
first
one
where
we
have.
You
know
all
of
the
audit
logs
from
your
aws
cloud
accounts.
I
So
this
could
be
things
like
you
know:
creating
elbs,
exposing
s3,
buckets,
im
role,
creation,
etc.
Users
logging
into
your
account
and
then
what
do
they
do
when
they
log
in
all
of
that
kind
of
low-level
cloud
account
activity
is
a
new
data
stream
that
we
can
ingest
so
aws
is
there.
Gcp
is
coming
and
azure
very
shortly
and
I
believe
gcp
may
already
be
released
and
again
what
this
gives
us
is
yet
another
data
source
that
you
can
ingest
with
falcon
and
then
look
for
suspicious
anomalous.
I
Behavior
in
terms
of
you
know,
what's
being
deployed
into
your
aws
account
what
edits
are
being
made
to
the
resources
within
that?
Are
people
manipulating
policies?
Are
they
deleting
logs?
You
know
all
of
that
sort
of
stuff
that
you
can
then
look
again
for
writing
rules
against
trying
to
spot
anomalous,
behavior
so
bringing
this
all
back
together.
In
terms
of
the
falco
architecture,
so
you
can
see
that
if
we
go
kind
of
bottom
left
upwards,
so
we've
got
data
being
ingested
into
a
falco
instance.
I
So
that
could
be
from
the
kernel
on
the
host
where
you've
got
falco
running
or
it
could
be
external
data
sources
coming
in
via
http
rest
apis,
so
that
you
know
the
cloud
audit,
logs
or
kubernetes
audit
logs,
which
are
being
fed
into
those
that
ingestion
service.
That's
then
categorizing
that
data
and
making
it
in
usable
and
that's
then
fed
into
the
filter,
expressions
which
are
built
from
the
falco
rules.
I
We
can
then
trigger
an
alert,
and
so
this
is
why
you
know
so.
Falco
is
very
much
on
the
alerting
side
of
and
detection
of
that
security
structure
and
then
alerting
can
feed
to
a
number
of
outputs.
It
could
go
to
syslog
files,
shell
or
grpc.
I
There's
a
very
good
open
source
project
called
falco
sidekick,
which
we'll
talk
a
bit
more
about
which
can
which
you
can
ingest
falco,
and
then
that
has
integrations
with
all
sorts
of
third-party
capabilities,
and
you
can
also
do
things
like
create
prometheus
metrics.
You
can
sort,
you
know,
push
out
metrics
from
it.
I
So
fundamentally,
falco
rules
are
just
yaml,
and
so
you
can
see
there's
an
example.
Falco
rule
on
the
screen
here
and
the
rules
can
be
so.
The
rule
set
can
be
made
up
of
macros
and
lists,
and
so
on.
So
you
can
create
a
macro
and
then
call
that
macro
within
another
rule
and
so
on.
I
So
you
can
see
in
this
rule
here
we
rule
file
here
we've
got
a
few
macros
being
created,
so
I'm
looking
for
things
that
are
a
container
looking
for
things,
writing
to
fifos
and
so
on
so
and
each
of
these
macros
has
a
condition.
So
that's
a
what
are
we
looking
for
in
the
pattern
match
and
one
of
them
is
looking
for
a
container
id.
I
Other conditions look at the process command line, and then here we've got our full rule for container drift detection. Essentially: the event type is open or create, open-exec is true, it's a container, it's not writing to a FIFO, it's not writing to /var/lib/docker, and the raw result is greater than or equal to zero. So then, essentially, this is:
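A rule along those lines looks roughly like the following YAML — a paraphrased sketch of the container-drift rule described on the slide, not a verbatim copy of the shipped rule set, so field and macro names may differ slightly from the real defaults:

```yaml
- macro: container
  condition: container.id != host

- rule: Container Drift Detected (open+create)
  desc: New executable created inside a running container
  condition: >
    evt.type in (open, openat, creat) and
    evt.is_open_exec = true and
    container and
    not runc_writing_exec_fifo and
    not runc_writing_var_lib_docker and
    evt.rawres >= 0
  output: Drift detected (file=%evt.arg.filename container=%container.id)
  priority: ERROR
```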
I
Can
we
spot
a
new
file
being
opened
a
certain
binary
essentially
created
within
a
container,
and
if
I
click
on
this,
hopefully
this
will
switch
me
into
a
nice
pre-record
of
this.
So
I
decided
not
to
anger
the
demo
gods
and
try
and
actually
do
a
live
demo,
but
this
will
play
us
a
nice
little
execution
of
that
rule
that
we
just
saw
so
essentially
we
already
have
the
rule,
so
we
sent
falco
comes
with
a
default
rule
set,
which
is
quite
large
and
complex.
I
So
for
this
demo
it's
just
been
cut
down
to
just
that
single
yaml
rule
that
we
have
in
our
capability
so
that
we
can
just
run
falco
with
that
rule
and
then
create
a
container
and
trigger
the
detection
of
our
suspicious
behavior.
I
And
you
can
see
that
there
we
go,
we've
created
a
falco
instance.
That's
now
there
and
running,
and
now
we're
going
to
spin
up
a
container
and
within
that
container,
we're
then
going
to
create
a
new
executable
file
and
then
try
running
it
so
created
a
file
created
a
container
we're
now
running
in
the
shell
inside
the
container
and
my
colleague
that
developed
this
forgot
that
he
doesn't
have
vim
inside
his
container.
So
he
has
a
couple
of
aborted
attempts
to
create
the
file.
I
Then
he
realizes
what
he's
doing
so
yeah
so
he's
creating
a
simple
c
program
that
is
literally
just
going
to
generate
a
binary
that
opens
a
file.
I
And
then
we
compile
that
and
then
run
8.8
the
output
of
the
compiler,
and
you
can
see
straight
away
when
that
executable
ran
inside
the
container.
It's
now
triggered
my
rule
in
falco,
and
I've
got
that
detection,
as
I
you
know,
which
has
got
the
variable
substitution
in
it.
So
you've
got
the
context
of
what
was
going
on,
and
so
you
know
it's
pretty
simple,
so
this
is
just
in
a
single
standalone,
falco
local
container
running
on
its
own.
I
But
you
know
in
terms
of
that's,
you
know
sort
of
my
first,
my
first
falco
rule,
but
you
get
the
idea
of
you
know.
This
is
fairly
simple.
When
you
know
it's
the
classic
thing
of
try
and
do
one
thing
well,
which
is
what
falco
aims
to
do,
which
is
that
detection
piece
giving
you
that
rich
capability
of
starting
from
a
very
fundamental
information
source
and
building
up
from
there
in
terms
of
then
developing
policy
and
having
rules
to
do
things?
I
As
I
say,
falco
comes
with
a
default
rule
set
which
gives
you
quite
a
lot
of
additional
rules,
not
just
the
one
we
saw
there.
There's
also
the
cloud
native
security
hub,
which
is
a
cystic
sponsored
website
where
we
have
policy
being
developed
and
so
on
there.
You
can
go
in
and
look
for
specific
use
cases
and
find
falco
rules
amongst
other
things,
and
you
can
see
you
know,
there's
things
there
for
fluently
elastic
and
so
on.
I
There's
also,
quite
often
when
a
new
cve,
a
critical
cv
is
discovered
that
has
a
large
impact.
There
will
be
a
blog
post
written
by
somebody
at
systig
who
then
will
develop
a
specific
falco
rule
to
detect
the
exploit
of
that
cbe,
and
then
that
will
be
pushed
out
to
this
as
well.
So
you
know
with
this
over
time.
This
resource
should
be
building
up,
but
you
can
contribute
to
this
as
well.
So
you
see
on
yeah
on
the
site.
I
There
is
the
ability
to
send
your
own
rule
suggestions
if
you
want
to
so
that
you
know
you
can
help
the
community
build
up
this
database
of
capability
in
applying
falco.
I
Installing
it
is
fairly
straightforward,
so
debian
or
red
hat
systems,
there
are
rpms
and
d
packages
available,
so
you
can
just
app
get
or
yum
install
it.
If
your
distro
doesn't
have
those
available,
then
you
can
just
run
a
shell
script
that
will
pull
down
the
latest
binary
and
install
it
for
you
into
locally.
I
And
finally,
you
can
you
can
run
it
as
a
docker
container
as
well.
So
you
can
just
pull
the
falco
security
falco
image
and
there's
full
instructions
on
how
to
do
how
to
install
it
and
run
it,
and
you
know
just
play
around
with
it
on
that
doc
on
the
falco.org
website.
I
If
you
want
to
install
it
onto
kubernetes,
then
we
also
have
home
charts
available
or
you
can
just
go
and
grab
the
daemon
set
and
apply
that
directly
so
depending
on
which
you
prefer,
but
again
so
running
falco.
As
a
system
d
process
gets
you
a
better
isolation,
and
you
know
it
means
you're
running
low
level
on
the
host
outside
of
the
container
runtimes.
I
Actually,
when
it
comes
to
kubernetes,
you
can
then
just
deploy
it
out
as
a
demon
set
onto
the
kubernetes
cluster.
That
demon
set
will
require
privileged
access,
of
course,
because
it
is,
it
still
needs
to
hook
into
the
kernel
on
the
host.
So
it's
going
to
require
that
root
level
access
to
do
that.
I
And
then,
in
terms
of
additional
things,
so
this
this
is
just
an
and
this
slide
deck
will
hopefully
be
made
available,
but
there's
a
whole
load
of
additional
tools
that
you
can
use
to
then
help
you
manage
and
create
and
deploy
falco
into
your
environment,
so
falco
ctl
for
just
controlling
things
sidekick,
as
I
mentioned
earlier.
So
this
has
got
a
whole
load
of
integrations
into
third-party
things.
I
So,
once
you've
triggered
a
rule,
you
can
pass
it
into
sidekick
and
that
can
integrate
into
various
other
workflows
so
that
you
can,
you
know
trigger
it
other
things
and
then
other
clients,
and
so
on
also
a
prometheus
exporter,
which
can
be
very
useful
in
terms
of
then,
if
you're
using
prometheus
as
your
monitoring
tool
as
your
monitoring
framework,
then
you
can
feed
those
events
into
prometheus
as
well
and
start
dashboarding
it
and
so
on
so
yeah,
just
a
quick
mention
of
sidekick.
So
this
is
a
community
contribution.
I
It
came
from
somebody
not
connected
with
assisting
at
all
who's
developed
this
as
a
way
to
integrate
falco
into
things.
Like
slack
elastic.
You
know
you
can
see,
there's
a
whole
load
of
lists
of
integrations
on
there,
so
it's
extremely
powerful
and
capable
and
yeah
there's
a
growing
list
of
that,
because
there's
been
a
nice
architecture
for
adding
these
integrations
into.
I
So
that's
extremely
useful
and
you
can
see
that
yeah.
What
just
just
the
explanation
of
what
that
falco,
sorry
that
sidekick
capability
is
going
to
do
is
actually
then
take
the
events
out
of
falco
and
publish
them
into
you
know
an
sms
topic
or
a
team's
channel
or
whatever
else
you
want
to
push
that
event
to,
and
that
could
then
have
you
know
third
part
additional
workflows
triggered
from
that
event.
I
So
that
could
be,
you
know,
execute
a
reaction
like
killing
the
offending
pod
or
so
on
and
also
send
on
additional
notifications
elsewhere.
So
it's
it's
first
step
to
building
that
enterprise
level
security
capability.
I
If
you
want
to
build
one
by
yourself,
another
thing
you
can
do
so
cms
are
the
security,
information
and
event
management.
So
there's
a
number
of
products
out
there
available
for
doing
this
kind
of
event:
management
where
you
essentially
they'll
all
ingest,
a
stream
of
events
from
various
sources
and
then
help
hope
to
correlate
events
and
respond
to
things
when
you
have
a
security
breach.
I
If
you're
looking
to
do
that,
building
with
your
own
open
source
stack,
then
you
can
use
efk
for
that.
So
you
can
push
the
events
out
of
falco
and
then
use
that
to
construct
your
own
seam.
So
if
you
are,
you
know,
are
they
build?
It
yourself
bent
then
please
do
that
and
then
you
can
see
that
yeah
you've
got.
You
can
start
looking
at
generating
those
charts
and
pictures
of
what's
going
on,
and
you
know
what
rules
are
being
triggered.
I
So
you
can
then
start
seeing
threat
maps
of
what's
happening,
live
on
your
systems
and
so
on
and
what's
going
on
over
time
and
there's
a
whole
load
of
resource
out
there.
So
again,
there's
a
whole
lot
of
links
here
that
we
will
be
able
to
provide
you
with
after
the
event
talking
about
you
know
the
various
technologies
that
falco
is
built
on
things
about
ebpf
and
so
on
and
various
other
sources
of
information
around
falco.
I
It
is
a
growing
project
in
terms
of
both
you
know,
usage
and
contribution,
and
you
know
the
momentum
that
has
is
growing.
So
please,
if
you
want
to
get
involved,
there
are
a
number
of
ways
you
can
contribute
so
falco
rules.
As
I
mentioned,
we
have
that
security
hub
where
we're
always
looking
for
new
use
cases.
So
if
you
start
using
falco,
you
develop
something
yourself
and
you
think
it's
useful
to
the
community,
please
to
contribute
it
back.
I
If
you
want,
if
you
have
a
new
integration
that
you'd
like
to
see
in
sidekick,
well,
you
know
you're
free
to
develop
it
yourself
and
contribute
it
back
as
well,
and
then
we're
always
looking
for
additional
use
cases,
workflows
third-party
integrations
documentation
and
so
on.
So
it's
yeah.
If
you
would
like
to
contribute,
then
there
will
always
be
plenty
for
you
to
do
so.
Please
contact
the
project,
maintainers
and
they'll
be
happy
to
give
you
something
to
do
so.
How
are
we
doing
for
time?
I
Yeah,
just
a
few
minutes
left.
So
just
a
quick
word
about
so
I
work
for
systig,
as
does
chris,
and
while
we're
the
the
originators
of
falco,
it's
now
cncf,
but
we
still
use
it
heavily
within
what
we
do.
I
So
we
start
with
falco
as
the
foundation
for
systick
secure
where
we're
using
it
embedded
into
the
cystic
agent,
but
we're
using
exactly
the
same
three
information
sources
we've
just
been
through,
so
the
system
calls
the
kubernetes
audit
logs
and
the
cloud
runtime
audit
logs
and
what
we
do
from
that
is
we
build
up
from
there
to
create
a
secure,
devops
platform.
I
So
most
of
what
I've
been
telling
you
about,
you
know
the
third
party
integrations
and
you
know,
building
your
own
seam
and
all
of
that.
Well,
all
of
that
integration
work
and
that
policy
management
work
comes
out
of
the
box
with
systig
secure
and
we
also
add
in
additional
capabilities
like
image
scanning
for
vulnerabilities
in
this
configuration
cloud
compliance
from
a
static
point
of
view.
I
It can also take interventions — kill, stop or pause containers, and so on — and we have integrated incident response, so we can trigger captures from Falco rules as well. That's another Sysdig technology which builds on the same capability, where we're essentially dumping out a capture file generated by the Sysdig runtime, to then do post-event forensic analysis.
I
So
if
you're
looking
to
adopt
cloud
native
security
within
an
organization,
you
have
the
option
of
building
it
yourself.
I
We'd
argue
that
possibly
your
time
might
be
better
spent,
actually
furthering
your
business
and
that
you
know
systig
will
give
you
a
whole
lot
of
stuff
out
of
the
box
that
you
then
don't
have
to
build
yourself.
But
it's
building
on
that
same
open
source
capability
to
run.
All
of
that.
I
So
with
that,
I
think
that
that
will
cover
all
I
wanted
to
today
I
mean
there's
a
whole
lot
of
features
in
there
in
terms
of
what
systig
has
I'd
be
happy
to
talk
to
that
talk
about
that
at
length.
But
at
that
point
I
just
open
up
and
see.
Do
we
have
any
questions.
H
I
don't
see
any
questions
on
slack
or
in
youtube,
but
folks
are
online.
Please
feel
free
to
ask
questions
in
either
of
those
portals,
as
as
tom
had
just
there.
If,
if
you're
watching
this
on
the
replay,
you
can
also
ask
questions
in
the
falco
falco
slack
channel,
which
is
over
on
the
kubernetes
slack
group
rather
than
the
cncf1
but
yeah.
Thank
you.
Thank
you
very
much
tom.
That
was
a
great
presentation
josh,
I'm
not
sure
if
you've
got
anything
else
to
add
to
that.
G
No,
no,
that
was
fantastic
thanks
very
much
tom.
In
that
case,
you
know,
that's
the
that's
the
end
of
of
this
track
for
today,
just
a
reminder
that
we
will
be
back
tomorrow
so
follow
us
on
twitter.
Keep
on
the
lookout
head
over
to
kcd
uk.io
for
all
the
information.
G
If
you
want
to
head
over
to
the
track
one
youtube,
I
will
post
the
link
in
the
in
the
youtube
chat.
You
can
also
find
it
again
on
kcd,
uk.io,
we'll
be
finishing
off
and
curry's
talk
on,
kubernetes
is
doomed
and
then
we'll
be
kicking
off
with
with
the
pub
quiz.
So
we'd
love
to
have
all
of
you
over
there
to
join
us
for
that.
But
thank
you
very
much
and
that's
chris
and
I
out
for
today.
I
think
pop.