Description
Monday, December 10 • 10:00am - 10:30am
Security Through the Ages - Tim Allclair & CJ Cullen, Google
https://sched.co/JJ5m
First, we'll stroke your egos by reviewing the history of Kubernetes security and marvelling at the progress we've made. Next, we'll examine some of the hottest new features and how you might be affected. We'll conclude with a call to arms by highlighting a few of the most gnarly issues on the horizon.
Presenters:
Tim Allclair, Google
CJ Cullen, Google
A: That is the security component, so whose responsibility is this? Well, there's SIG Auth, and that's maybe who you think of when you think of security. Nominally, SIG Auth is responsible for authentication and authorization in the Kubernetes control plane, as well as for provisioning some of the pieces that make different parts of the Kubernetes world interact with each other securely. But there are also plenty of other SIGs doing very deeply security-involved work.
A: We see every feature we develop as a chance to improve the security posture of Kubernetes. Even as Kubernetes users, as we're building things on top of Kubernetes, thinking about the security implications of the choices we make is important now, and it's going to be even more important going forward. So this morning, what we wanted to do is take a quick look back. Working in security can sometimes be tough. It feels like you spend half your time putting out the ongoing fire, and the other half trying to convince people that the breaking change you want them to make is actually for their own benefit and worth all the pain you're putting them through. But we've made a lot of progress, so we did want to call that out.
A: In the early days, you could reasonably say there was good security, as long as you absolutely trusted every single thing that came into contact with your Kubernetes cluster. There was configurable authentication and authorization. Unfortunately, a lot of it was done statically: it was either flags passed to binaries, or config files. As we were trying to push this dynamic system of Kubernetes, the static configuration was really just an impedance mismatch, and so what ends up happening is you just ignore that stuff.
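For a sense of what that static configuration looked like, here is an illustrative sketch. The flag names are real kube-apiserver flags of that era, but the file paths are hypothetical:

```sh
kube-apiserver \
  --token-auth-file=/etc/kubernetes/known_tokens.csv \
  --authorization-mode=ABAC \
  --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl
```

Any change to one of these files meant editing it on the master and restarting the API server, which is exactly the impedance mismatch being described.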
A: You make it really easy for things to work, and you don't take the hit of having to restart your API server every time you want to change authorization policy. We were trying to push this world where you containerize your processes and then you feel safe from the world around you, and in Kubernetes 1.2 you could do that, but we really hadn't pushed into the realm of how we control what people are doing through the Kubernetes API.
A: So again, as long as you trusted everybody that was creating pods, you could maybe assume that you were a little bit safe. And finally, it was a very new world. A lot of the ecosystem was still pretty new, still poking around figuring out the right way to think about the security model, and that leads to a lot of somewhat simplistic ways of building your security model.
A: The kind of canonical one I wanted to point out is assuming that if someone has access on the network, you might as well grant them admin access, because they wouldn't be talking to your network if you didn't already trust them. Moving from that into a more mature stance: if we fast forward to something more recent and take a look at Kubernetes 1.12, we're getting a lot better. There are a ton more protections, and as the usage of Kubernetes evolves, there are a ton of new expectations.
A: People are able to build on top of these pieces, and there's just a ton of work that's gone into building up layers of defense: trying to figure out what exactly our security boundaries are and trying to enforce them as best we can, starting to ask questions like, what happens if this thing is compromised? What happens if this layer doesn't hold? What else do we have there?
A: What can we do to figure out what happened after the fact? There's been a bunch of work on separating things out, making any single vulnerability less harmful. And now, it wouldn't be a proper look back if we didn't also consider some of the vulnerabilities that have happened, and that we've fixed, over the past few years. These things happen. Bugs happen. Some of them have security implications.
A: The important part is that we're ready to respond to them, and that we respond to them in a mature way, such that we can stand up here, look back, and have some fun with it after the fact. So, the first one: this was probably our first real incident. It might have been the first time we had to go through the process of getting a CVE assigned and so on.
A: This is a fun one, where you could submit a request to the API, maybe with some dot-dots in the name of the resource you were requesting, and convince etcd to give you access to some stuff outside of what you should have had access to. This one was interesting because it was kind of the first time we were thinking: this violates the expectations of what people would think the API is supposed to let you do. And it was the first time we had to go through this process.
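The underlying class of bug is easy to sketch. This is not the actual Kubernetes code, just a minimal Python illustration of why a server must validate client-supplied names before joining them into storage key paths:

```python
import posixpath

def is_safe_path_segment(name: str) -> bool:
    """Reject names that could escape the intended key prefix:
    empty names, '.'/'..', and anything containing a path separator."""
    if name in ("", ".", ".."):
        return False
    if "/" in name or "\\" in name:
        return False
    if "%" in name:  # be conservative about encoded separators like %2F
        return False
    return True

def resource_key(prefix: str, name: str) -> str:
    """Build an etcd-style key, refusing traversal attempts."""
    if not is_safe_path_segment(name):
        raise ValueError("invalid resource name: %r" % name)
    return posixpath.join(prefix, name)
```

With this check in place, a name like `../../secrets/token` is rejected instead of silently resolving to a key outside the caller's prefix.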
A: Another one: there was a vulnerability where you could take a valid key that you held, with a certificate that was yours, present it, but then just shove somebody else's public key into the chain that you presented, and you could authenticate as them. This was kind of our worst security vulnerability up to that point.
A: Then this next one is interesting. There was a vulnerability in pod security policy where, through a nicely crafted pod spec, you could actually get access to some privileges with a pod that you shouldn't have had, even if your cluster did have a very well-formed pod security policy. The interesting thing about this one is that it's only a vulnerability because of the changing expectations. As soon as we introduced the pod security policy model, and the thought of "well, actually, I don't necessarily trust everybody that's talking to my API," then you have to consider that things that bypass those protections are now considered vulnerabilities. If you just consider the external layer of Kubernetes as your shell, this isn't even a vulnerability, but the changing expectations make it more worrisome.
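For reference, a "well-formed" PodSecurityPolicy of the sort being described might look like this. This is a minimal hand-written example, not one from the talk:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  volumes: ["configMap", "secret", "emptyDir", "projected", "downwardAPI", "persistentVolumeClaim"]
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```

The vulnerability meant a crafted pod spec could obtain privileges even when a policy like this was correctly enforced on the cluster.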
A: And in a similar vein, this is a fun one: you could submit a pod spec that essentially tricks Docker into mounting some host directories that you should not have had access to. There's going to be a talk tomorrow on it.
A: As you've just seen, this wasn't our first serious Kubernetes vulnerability, but it was the first one to really get a lot of traction in the press. There were a lot of articles written about it, and I just want to point out: this is the new normal. This is what's going to happen in the future, so we should be ready to respond. We shouldn't be worried about these things happening. They will happen.
D: Thanks, CJ. So we've looked at how early Kubernetes was really focused on the external perimeter, and then, as security matured, we started to look more at adding additional layers within the cluster and started to think about these multi-tenant use cases. That's a lot of the work that we did going into 2018 and looking forward.
D: It's the focus of a lot of the work in 2019 as well, and so I just want to highlight three different features that are particularly cross-cutting, across the organization and all the SIGs, and that might affect other work going on. There are a lot of other great security improvements and work happening within the community, but these are just the three that we chose to talk about for time.
D: The first I want to call out is the enhancements to service accounts. Service accounts have been the kind of service identity, or workload identity, since the beginning of Kubernetes, but there were a few problems with the way they were implemented before. We had these tokens that were a static credential: you can rotate one, but it's a pretty manual process, and a lot of clients aren't really equipped to handle that well. With the enhanced service accounts, tokens now have expiration built in, and the kubelet will even handle auto-rotation for you.
D: So if an attacker steals one of those tokens, they can't use it forever. The second thing is that the original service account tokens were shared by all the pods running under that service account. This meant that, for the purposes of audit, or from the control plane's perspective, it all looks like the same thing; if an attacker manages to compromise one pod in that workload, they might be able to act as the whole workload. So for better auditing, we now have per-pod identity information added into the token.
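Those per-pod claims live inside the new bound tokens. Decoded, a bound token's JWT payload looks roughly like this; all field values here are illustrative, not from a real cluster:

```json
{
  "aud": ["https://kubernetes.default.svc"],
  "exp": 1544954400,
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {"name": "mypod", "uid": "0000-aaaa"},
    "serviceaccount": {"name": "default", "uid": "1111-bbbb"}
  },
  "sub": "system:serviceaccount:default:default"
}
```

The `pod` block is what lets an audit log distinguish which pod presented a token, even when many pods share one service account.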
D: We might be able to do more fine-grained authorization off of that. And then the last one is audiences. You can use a service account token not only to talk to the API server, but to talk to another service. If that service implements TokenReview, it can delegate the authentication task to the API server. The problem is that the other service now has the requester's identity token, and it can do anything as if it were that identity.
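The delegation pattern being described: the receiving service posts the client's bearer token to the API server as a TokenReview object and gets back whether it authenticated, and as whom. A sketch, with a placeholder token value:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<bearer token presented by the client>"
# The API server fills in status.authenticated and status.user,
# so the service never needs its own credential store.
```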
D: Audiences aim to fix this by saying: this token is only meant to talk to, say, the metrics server. Now the metrics server can't use that token to talk to the API server; it's only good for that one service. TokenRequest, kind of the basis of these new service accounts, went in as beta in 1.12; it was alpha before that.
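Concretely, a pod can request one of these audience-scoped, expiring tokens through a projected volume backed by TokenRequest. A sketch, where the image name and audience are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-client
spec:
  containers:
  - name: app
    image: example/app   # hypothetical image
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: metrics-server   # hypothetical audience name
          expirationSeconds: 3600
```

The kubelet keeps the file at `/var/run/secrets/tokens/token` refreshed, so a stolen copy expires instead of working forever.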
D: Starting in 1.13, you can use these in place of the old service account tokens. We'll be thinking about how to roll this out in 2019, but there are some subtle discrepancies between how the new tokens work and how the old ones worked, and there are two sessions later in the conference if you want to learn more about this in detail. The second feature I want to call out is near and dear to my heart, which is sandboxes.
D: With most host-container sandboxing, there are a lot of mechanisms in the kernel that are used to harden the container boundary. The problem is, it all still goes through the same kernel. So even though it's a smaller attack surface, a single kernel vulnerability can lead to a complete node compromise. With sandboxes, the goal is to add a second layer of security boundaries, so that no single vulnerability can lead to container escape, and we're doing this by working with runtimes like gVisor, Kata Containers, and Windows Hyper-V.
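The way this surfaces to users is per-pod runtime selection. With the RuntimeClass mechanism, which was alpha around the time of this talk, it looks roughly like the following; the handler name `gvisor` is whatever the cluster admin has configured:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  runtimeClassName: gvisor   # must match a RuntimeClass set up on the cluster
  containers:
  - name: app
    image: example/app       # hypothetical image
```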
D: But it's not enough to just harden the kernel attack surface; there are a lot of other attack surfaces exposed in Kubernetes, and another example is through volumes. If you just take Kata Containers today and run it with an older version of Kubernetes, the symlink vulnerability we mentioned earlier would actually still be exposed: you could potentially mount host volumes into the container. So we need to think about how we harden that volume interface to make sure that no single vulnerability there can compromise the sandbox, and similarly with the networking side as well. There are a lot more sessions on this; check out the runtimes track.
D: This means we can use a least-privilege model for the kubelet and say: hey, kubelet, you're not running a pod that needs this secret, so we're not going to give you access to it. The problem is, there's this attack where the attacker can steer workloads to a particular node by manipulating scheduling constraints, and then, say, DoSing that service, forcing it to reschedule. Once the workload gets scheduled onto the node, the node is now granted access to that secret.
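The least-privilege kubelet model described here is what the Node authorizer and the NodeRestriction admission plugin implement. These are real kube-apiserver flags, shown here as a sketch:

```sh
kube-apiserver \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction
```

With these enabled, each kubelet can only read secrets for pods actually bound to its node, which is exactly why the scheduling-steering attack above is worth worrying about.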
D: So we wanted to wrap up by talking about some really gnarly issues that remain in Kubernetes, that we really need everyone's help to fix. This isn't something that we can just address in SIG Auth or the Product Security Team. The first of those I want to talk about is vendored dependencies.
D: Let's see... oh, there we go. So this is only thinking about a malicious attacker, right: they've backdoored something, and yeah, we might be able to catch that. But we just talked earlier about how plain software bugs lead to vulnerabilities. This is where most vulnerabilities come from, and good security operations is just to patch all the time, right? A vulnerability comes out: no problem, patch it, okay, we're good. In Kubernetes, we haven't been doing this.
D: This is just a selection of our top 20 stalest dependencies. These are things that are actively developed, and we haven't updated them in over a year; they're almost a thousand commits behind HEAD. So it's a problem. What can we do? We should think about whether we actually need to vendor everything that we're currently vendoring. We're starting to do things like breaking cloud providers out of tree, into separate repositories and separate plugins, but this comes with another problem.
D: Now we have, you know, 150 different repos, all vendoring different versions of things, and we want to make sure that everything stays patched and up to date. We're also increasing visibility into what changes are going in upstream, and making sure we have good ownership over these dependencies to keep them up to date.
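Kubernetes still used vendoring tools at the time, but with Go modules, checking how far behind your dependencies are is a one-liner; shown here as an illustration of the kind of hygiene being argued for:

```sh
# Lines printed with a bracketed version have an update available.
go list -m -u all | grep '\['
```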
The next issue I want to talk about is secure defaults.
D: We mentioned this a little bit earlier, but why is this so hard? I think Brian touched on this a little in his presentation, but changing a default is always a bit contentious, and particularly so with security defaults, where a new security feature is almost by definition a breaking change.
D: What we're trying to do is prevent an attacker from doing something, but maybe we're also blocking a developer who is using that feature innocently, right? So we need to think about how we go about breaking everyone in the name of security. Just to throw out a few cheeky examples: I recently learned that the kube-apiserver and the kubelet actually default to authorization mode AlwaysAllow in their flags (that was alarming), containers run as root by default, and the API server doesn't verify the kubelet's serving certificate by default.
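The corresponding hardened settings do exist; they just aren't the defaults. As a sketch, using real flags with hypothetical file paths:

```sh
# kubelet: require authentication and authorization for its own API
kubelet \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/pki/ca.crt

# kube-apiserver: actually verify the kubelet's serving certificate
kube-apiserver \
  --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
```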
D: The problem is that the defaults those cloud providers and tools are using are all different from each other, and so we want to try to align the defaults across these different things, so that the user doesn't have to actually understand every setting to know that they're checking the right boxes. One way we've been talking about doing this is through security profiles. This is still under discussion, but the basic idea is to have different profiles for different levels of security, and then be able to share those configurations across the ecosystem.
D: And finally, I want to talk about scaling the org. We now have almost 150 different repositories in the Kubernetes organizations, and there's no way the Product Security Team can maintain expertise in all 150 of them. So the first thing we're doing: you may have noticed that every repository now has a SECURITY_CONTACTS file in the root.
D: These are people we can go to to help us triage vulnerabilities, and to help us with the release for that component. But it's also important to pay attention to security as you're reviewing PRs. In particular, think about: what's the worst that could happen if there was a bug in here? And don't be afraid to ask for help if you're not a security expert, or if you just want a second set of eyes on some sensitive code. And then there's a third piece to mention.
D: We've learned this along the way of doing Kubernetes security releases: investing in release infrastructure up front, if you're not part of the Kubernetes release train, will pay dividends later. Think about how fast you could patch a critical vulnerability, and whether you're able to patch those things in private. And to help us find more vulnerabilities in the system and keep these things at bay, we're trying to launch a Kubernetes bug bounty in 2019, to reward security researchers for responsibly disclosing vulnerabilities to us.
E: When we have a vulnerability and we patch it, there's normally a penetration test that goes along with it that can exploit the vulnerability. Are those tests going into the current CI system? Before I asked this question, I actually bugged Aaron, who happens to be sitting next to me; he didn't know the answer, so I figured if he didn't know and I didn't know, we should probably ask the question here.
F: The answer is yes. When we have embargoed fixes, we have tests for that. We have a private repo that runs a parallel set of CI tests, with tests that demonstrate the vulnerability and demonstrate that the patch fixes it. Depending on the severity, and on how clearly the test demonstrates how you exploit the bug, sometimes those tests get picked into the public branch later, once it's been disclosed.
B: With the vendor directory and the dependencies: we've got lots of things out of date, and many of the things we have in there aren't even pinned to release versions; they're just random commits in between. Are you working with SIG Architecture, or is there any plan to figure out how to correct any of this?