Description
Supply chain attacks are hard to mitigate because production environments have limited visibility into their software artifacts. This has resulted in a rise of related attacks.
Website: https://www.securesystems.de/
Organized by @Microsoft @kubermatic7173 @SysEleven
Thanks to our sponsors @CapgeminiGlobal, @gardenio, @sysdig, @SUSE, @anynines, @redhat, nginx, serve-u
Thank you for coming to my talk. It's about a container image signature overview. I am Phillip Phillips; that's me, that's my face. I'm an IT security engineer at SSE, and I am also a maintainer of the Connaisseur open source project.
What are you going to learn in this talk? So, what to expect from this talk: I'm going to give a very short motivation.
So, a short motivation. Why are we doing this? We're basically trying to protect against supply chain attacks, the classic supply chain attacks that have popped up every now and then over the past years.
So here we have an example setup. We have our code repository containing our containerized application. This application runs through a CI/CD pipeline, where we build our images, push them into our container registry, and eventually deploy them into our cluster.
The two attack points we care about here are: an attacker who might be able to inject malicious images into our container registry, and an attacker who somehow got access to our cluster, can run kubectl, and can spin up arbitrary pods with malicious images.
How are we going to solve these problems? Well, by signing container images and making sure that only actually trusted content ever hits our cluster. And how are we going to implement this? That's what I'm going to show now.
The first step is: how do we actually sign our images? Before that, we have to ask ourselves: what are we actually signing? I mean, we're signing container images, yes, that's clear, but what does that actually mean? What exactly is a container image? A container image follows the OCI image format specification, OCI standing for Open Container Initiative. This specification says an image consists of a manifest file, a JSON file, and this manifest file references a config file and a bunch of image layers.
The image tag is this mutable, human-readable descriptor, such as, for example, the alpine tag on redis:alpine, which roughly gives you an idea of what could be behind the image or what the image is based on. So in this example, it gives us the idea that the redis image is based on Alpine. A debian tag would give us the idea that it might be based on Debian.
The point is, it doesn't actually have to be this way. Just because it says alpine doesn't mean Alpine has to be behind it. The tag is mutable, and someone could trick you into thinking that you're downloading an Alpine image when in reality you're getting a Debian image. In contrast, there are the digests. These are immutable, unique hashes of the manifest file, and they correspond to the actual content behind your container image.
Since each of the image layers is referenced by digest as well, if any of the image layers changes, its digest changes; if that digest changes, the manifest file changes, and therefore the digest of the manifest file changes. So everything is connected, and a digest gives you an exact representation of a certain state of a container image. This is why the digest is always used, in one way or another, by every kind of signature tooling I'm going to talk about.
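To make that chaining concrete, here is a toy sketch. The manifest content is made up and heavily simplified, not a real OCI manifest; the point is only that the image digest is the SHA-256 of the manifest bytes, so changing any referenced layer digest changes the image digest too:

```shell
# Hypothetical, simplified manifest standing in for a real OCI image manifest.
manifest='{"schemaVersion":2,"config":{"digest":"sha256:aaa"},"layers":[{"digest":"sha256:bbb"}]}'

# The image digest is simply the SHA-256 hash of the manifest bytes.
digest=$(printf '%s' "$manifest" | sha256sum | cut -d' ' -f1)
echo "sha256:$digest"

# Change a single layer digest and the manifest digest changes as well.
manifest_changed=$(printf '%s' "$manifest" | sed 's/sha256:bbb/sha256:ccc/')
digest_changed=$(printf '%s' "$manifest_changed" | sha256sum | cut -d' ' -f1)
[ "$digest" != "$digest_changed" ] && echo "digests differ"
```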
If you're really interested in Notary v2 and want to know what they're doing, you can approach me and I can give you a rundown of what they're trying to achieve and how it differs from Notary v1, but in the interest of time I'm going to skip Notary v2 in this talk. So the first one is the original Notary, Notary v1. It's one of the earliest solutions and also known as Docker Content Trust, which tells you that Notary is fairly closely connected to the whole Docker ecosystem and the Docker client. The basic idea of Notary is to store signature data in an external server called the Notary server.
So how do we actually sign? Pretty much simply by using your usual docker push command, with the exception that you have to set the Docker Content Trust environment variable to 1. When you do that, everything in the background happens automatically and you get a signature. What's actually happening in the background is the following: internally, a bunch of keys is created, which are then used to sign metadata files that also get generated. Such a metadata file has a reference to your image and, more importantly, it has a mapping between the image tag you're trying to sign and an image digest that actually corresponds to the image. All these metadata files then get signed with the keys you generated and pushed into the Notary server.
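As a sketch of the signing side (registry.example.com/app:v1 is a placeholder image reference, and the passphrase prompts are interactive), the whole flow is just:

```shell
# Enable Docker Content Trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# Push as usual; on the first signed push for a repository, Docker prompts
# for passphrases, generates the keys, signs the tag-to-digest mapping, and
# uploads the signed metadata to the configured Notary server.
docker push registry.example.com/app:v1
```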
Why are we doing this? Well, that gets a bit clearer when we look at the verification side of things. Verification works via just a docker pull, again with the environment variable set to 1. Then what happens? You request these metadata files from the Notary server, validate their signatures with the public keys you generated, and then look up the mapping between tag and digest.
You establish trust by verifying the signatures of these files, and that's the whole flow of how Notary works and how you can be sure that, when you then actually pull the image by its digest, everything is trusted.
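The verification side mirrors the signing side (again with a placeholder image reference):

```shell
export DOCKER_CONTENT_TRUST=1

# With content trust enabled, the pull fetches the signed metadata from the
# Notary server, verifies its signatures, resolves the tag to the signed
# digest, and only then pulls the image by that digest. Pulling a tag that
# has no trust data fails instead of silently falling back.
docker pull registry.example.com/app:v1
```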
Having multiple metadata files and different types of keys actually gives us some benefits. We get a kind of freshness guarantee on all the signatures: signatures can expire and be re-signed every now and then, so we can be sure that our signatures are actually fresh and up to date. We get delegation of roles, that is, different hierarchies of keys, so that certain keys can only do certain things while other keys can do more. And we also get a kind of key compromise resilience: should one of our keys actually get stolen or compromised, we can fairly easily rotate these keys without much effort.
Those are the plus points. On the other hand, I have fairly simplified all of this; the actual logic behind it is really, really complex.
If you're interested in it, you can look up The Update Framework; Notary is basically a whole implementation of The Update Framework. This complexity is also the main reason why there's another version.
Notary v2, that is. Notary v1 never really became popular enough for many people to use it, but nonetheless it's a fairly powerful system once you actually know how to use it. There's also the downside of needing extra infrastructure, though if you're using Harbor as your image registry you can get around this, because Harbor comes with a Notary instance included, so you don't have to worry about that part. The other signing solution is Cosign. It's a much newer one.
Maybe you've heard of it. It's developed by Sigstore, and it doesn't actually need any additional infrastructure.
Here you can generate your own key pair using cosign generate-key-pair; you then get a private and a public key and can already get going and run cosign sign to sign a given image. What happens here is that you generate a certain payload that includes the digest of the image you're trying to sign at that moment. Over this payload you then create the signature with your key, and you put the payload and the signature into a kind of pseudo image. So you're using the OCI image format again: instead of a config you put an empty config in there, and instead of the image layers you put your payload and your signature in there, and then you push this pseudo image into your registry.
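In commands, the signing flow looks roughly like this (the image reference is a placeholder, and cosign generate-key-pair prompts interactively for a key password):

```shell
# Generate a key pair; this writes cosign.key (private) and cosign.pub (public).
cosign generate-key-pair

# Sign the image: cosign builds a payload containing the image digest, signs it
# with cosign.key, and pushes payload plus signature as an artifact into the
# same repository, next to the image itself.
cosign sign --key cosign.key registry.example.com/app:v1
```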
It sits next to the actual image you were signing. Where to put this pseudo image is a convention: it goes into the same repository, tagged with the digest of the signed image plus a .sig suffix. Why this convention? So that we actually know where to look for the signature once we try to verify an image, which is the next part: we run cosign verify and want to find the signature.
We can do that by taking the digest of our current image and then, per convention, asking the registry for the image's digest-plus-.sig tag. There should be our signature. We fetch this pseudo image, and inside it we find the signature, which we can validate with the public key. Once we have done that, we have a trusted payload that also contains a digest, and then we can cross-check whether the digest of the image we have corresponds to the digest in the payload.
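The verification side, again with a placeholder reference:

```shell
# Verify: cosign resolves the image digest, looks up the signature artifact
# stored under the sha256-<digest>.sig tag in the same repository, checks the
# signature against the public key, and cross-checks the digest in the payload.
cosign verify --key cosign.pub registry.example.com/app:v1

# You can also ask cosign where it would look for the signature; this prints
# the full reference of the signature pseudo image.
cosign triangulate registry.example.com/app:v1
```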
Cosign has key management system support, so you can put your keys, for example, into Kubernetes secrets. You don't actually have to provision them on your local machine; they can live in a Kubernetes cluster, and Cosign will access the cluster to get the keys and then do the verification and signing. There is support for annotating your signatures.
If you want, you can, for example, add git commit hashes into your signatures to get some extra value. And there's also keyless support, where you don't actually have to manage any keys. Or rather, you're still using keys, but ephemeral keys that are bound to your email address, and this email address is then verified via an OIDC flow to prove that it actually belongs to you.
Yes, that's Cosign; you can check it out. Now the question is: how do we actually verify? Or, a better question: how do we actually verify inside of Kubernetes? Okay, we now know how to verify with Cosign and how to verify with Notary, but we don't know how to tell Kubernetes to do all these things for us. Every time we create any kind of resource, a deployment, stateful set, daemon set, whatever, Kubernetes ideally should do all these verifications for us, and we shouldn't have to worry about anything.
We can do this with admission controllers. They have been mentioned a few times in other talks, but I'm going to go through them again. These are little services you can install in your cluster that hook into the Kubernetes API. Hooking into the Kubernetes API essentially means that every time you want to create any kind of resource, here for example a deployment, you go through three steps: first, the API server authenticates you, then it authorizes you, and then a so-called admission review is sent to all your admission controllers.
This admission review contains information such as who is trying to do what, with what kind of spec. In this case, user A tries to create a deployment with the image redis. Based on this information, all your admission controllers can make certain decisions and then either deny the whole request or allow it.
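Schematically, such an admission review request looks roughly like this, trimmed to the fields relevant here (the uid is a made-up example value):

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
    "operation": "CREATE",
    "userInfo": { "username": "user-a" },
    "object": {
      "kind": "Deployment",
      "spec": {
        "template": {
          "spec": {
            "containers": [ { "name": "app", "image": "redis" } ]
          }
        }
      }
    }
  }
}
```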
There is a differentiation between two kinds of admission controllers: validating and mutating admission controllers. The validating admission controllers just make the decision: yes, I want this resource to be deployed, or no, I don't want this. The mutating ones, in addition to denying or allowing a request, can also mutate the whole resource being deployed. A mutating admission controller can, for example, simply change the redis image to mongodb.
It could do that if it pleases, or if the admission controller is set up to do so. I don't know why you would write such an admission controller, but you could, and that's exactly the mechanism we're somewhat abusing to implement verification in Kubernetes: we write admission controllers that extract the image references inside our resources and then run either the Notary verification or the Cosign verification on them.
We can also apply policies to these image references: for example, for certain images we only want to accept certain keys, or for certain images we want to use one verification method and for others another. So if you have a split where some images are signed with Notary and others with Cosign, you can do that.
At the end, you can either deny or allow the whole request, and, if you verified the signature, you can also mutate the image reference to actually use the digest instead of the image tag. That's pretty much the process behind all the admission controllers that try to solve this whole image verification problem.
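The corresponding admission review response would then look roughly like this (uid and digest are placeholder values):

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "<base64-encoded JSON patch replacing the image tag with its digest>"
  }
}
```

The patch field carries a base64-encoded JSON patch, e.g. `[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"redis@sha256:…"}]`, which is how the tag-to-digest mutation is expressed.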
There are a few of them. I listed some of them here; there are probably a lot more, this is not a complete list. Which one you actually use in the end depends pretty much on your use case.
Do you just want image verification, or do you want to do a bit more? Some of these admission controllers are not specialized in just doing image verification. And it also depends on what kind of signature scheme you actually want to use.
So if you want to explore Connaisseur, you can just clone the whole project and then use our Helm chart to install it into your cluster, and then you can already start trying to run certain images and see whether Connaisseur allows them or not. We have some predefined keys in our policy, so you can start by running the hello-world image.
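A sketch of that getting-started flow; the chart path and flags follow the project README at the time of the talk, so check the current Connaisseur docs if they have changed:

```shell
# Clone the project and install it via the bundled Helm chart.
git clone https://github.com/sse-secure-systems/connaisseur.git
cd connaisseur
helm install connaisseur helm --atomic --create-namespace --namespace connaisseur

# Try a signed image; the predefined policy keys admit it.
kubectl run hello-world --image=docker.io/hello-world
```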
Connaisseur will allow it, since the hello-world image on Docker Hub is actually signed. Then you can try to run an unsigned image and see that Connaisseur will deny it, because there's no signature information for that image. Once you've done that, you can explore our policy and see if you want to write your own policies. Policies are set up in a certain way: the configuration is separated into two parts, the policy part and the validators part.
The policy part consists of multiple rules, each defining an image pattern, such that for any given image exactly one of the rules in your policy applies. These rules in a way link the image to one of the validators, and the validators are what decide which signature scheme to use. So in this example, the image is matched onto our Notary validator, which does the whole Notary validation, and there you can also define the public keys that are used to actually do the validation.
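Roughly, such a configuration looks like this. The names, pattern, and key material are placeholders, and the exact schema may differ between Connaisseur versions, so treat this as an illustrative sketch:

```yaml
validators:
  - name: my-notary-validator
    type: notaryv1
    host: notary.docker.io
    trustRoots:
      - name: default
        key: |           # placeholder public key
          -----BEGIN PUBLIC KEY-----
          ...
          -----END PUBLIC KEY-----

policy:
  - pattern: "docker.io/myorg/*:*"    # rule matching an image pattern ...
    validator: my-notary-validator    # ... and linking it to a validator
```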
The same works for Cosign, where you then just use a different validator, a Cosign validator. We also have special validators that allow for static allow or deny listing. So if you have certain images in your cluster that you know will never have signatures, such as internal Kubernetes services like the API server, which you don't want to validate because you know they won't have signatures and you don't actually maintain those images, then you can put them on an allow list and thus always skip the validation for them, in a sense.
Here we actually use our Helm chart to install Connaisseur into a cluster. That was successful; then we look at what the deployment looks like, and we have three pods of Connaisseur. Then we try to run an image, in this case the hello-world image, which gets admitted. An unsigned image, on the other hand, is denied and will never be deployed into our cluster. That's pretty much it; that's the gist of it.
That's how to do verification in Kubernetes. Where to go from here? If you want to have a closer look at Connaisseur, you can explore our feature set. We can do multiple things: we allow verification with Notary v1 and Cosign and support some of the features that Notary and Cosign actually give us, for Notary, for example, the delegation to different keys.
So, for example, you could define delegations such as linting, testing, and scanning in your pipeline, let each of these steps add a signature to your image, and then, at the end, when you actually deploy your image into your cluster, check whether linting, testing, and scanning ever ran by checking all three of these signatures. That can work; it's one use case. The same goes for Cosign. We also have things like detection mode, for when you first try to implement all of this.
You don't want to block all your developers from using any kind of images; you might get in a little bit of trouble with your developers if they can't deploy while trying to push some quick fixes. So with detection mode, you only activate a warning: should no validation be possible, instead of completely blocking the request, Connaisseur just emits a warning.
We also support alerting, so you can send a message for a failed verification to some Slack channel or anywhere else, and things like namespaced validation, so that you only validate on certain namespaces and ignore other namespaces. And also, if you want to join us: we are a community-driven project. You can really help us out to get more features into Connaisseur and support the whole ecosystem of image signatures. That'd be great, and that's pretty much it.