From YouTube: Introduction to Keylime with Axel Simon and Luke Hinds (Red Hat) | OpenShift Commons Briefing

Description
Keylime is a CNCF-hosted project that provides a highly scalable remote boot attestation and runtime integrity measurement solution. Keylime enables users to monitor remote nodes using a hardware-based cryptographic root of trust. Keylime was originally born out of the security research team at MIT's Lincoln Laboratory. https://keylime.dev

OpenShift Commons Briefing
Nov 9th, 2020
Guest Speakers: Luke Hinds and Axel Simon (Red Hat)
Host: Diane Mueller (Red Hat)
A: All right, everybody, happy Monday. Welcome to another OpenShift Commons briefing. As we like to do on Mondays, we like to have upstream projects talk about where they're at right now and new initiatives, so if you have one, reach out and let me know and we'll give you the podium. Today we're giving the podium to a new CNCF sandbox project called Keylime, that I know this much about, so: very little.
B: Sure, I guess I'll start quickly. My name's Axel Simon, I'm part of the Red Hat Office of the CTO. I work with Luke in the security team of the Emerging Technologies department, so we're basically focusing on all the new technologies that are going to shape what's happening in the next couple of years, to a horizon a bit beyond that; we extend our thinking out to five, and maybe even ten years, as we try and really take the long view.
B: But most of the stuff we look at is more on the horizon of a couple of years, and I've been working on a few open source projects that are security focused, Keylime being one of them. Prior to that, I was doing quite a lot of work on blockchains, and it's not entirely irrelevant here, because both have to do with distributed systems: how you have multiple systems and try to maintain the integrity of all of them.
B: Right, so I'll introduce you all to Keylime, and you may be wondering what Keylime is, beyond a cool logo and a nice name. It all comes from a research paper from the beginning of 2016, "Bootstrapping and Maintaining Trust in the Cloud."
B: That's an issue you might have run into: it's hard to know what state a machine you boot in the cloud is really in, if you don't have anything to base it on. You may be told that this machine is running, say, CentOS 7, but it's hard to know exactly what it's running, and so you need a way to bootstrap confidence in that state of the machine. This is fundamentally what the research paper is about.
B: It was written by Nabil and Charles at MIT, and later that same year, in 2016, they came up with a prototype which would become Keylime. Over time that kept moving forward, and eventually, in 2018, it all moved to GitHub and a community started forming around it.
B: I think Luke started participating around that time, maybe a bit earlier, I'm not sure, but basically the project really gets started and goes from a prototype to an open source community project. And very recently, about a month ago, thanks to Luke's efforts, Keylime was accepted as a CNCF sandbox project. So we are now part of the Cloud Native Computing Foundation, which is interesting, because Keylime very much is dedicated to the idea of, you know, multiple nodes and how you establish trust in them.
B: So, what exactly does Keylime do? Well, Keylime tries to provide three main things. The first of them is remote attestation: the capacity to check, without being at the actual computer that is running it, that it is in a state you believe in, a state you can check. So you want to attest from afar (remotely, obviously) that the machine is in the state you think it is, and to do that we use two things.
B: We can measure the boot, to check what it boots into, and then we can measure the runtime using a Linux subsystem called IMA; we'll get back to that a bit later. But that's the first part: remotely checking the state of a node.
B: The second one is encrypted payloads. Once you can check that the node is in a trustworthy state, you can send it payloads that are encrypted and that it can decrypt. That can be used for several things, but basically you can bootstrap your node and give it extra information, including secrets, and that's very useful in this day and age: there are always secrets to manage, and this enables you to do that.
B: And lastly, we have a revocation framework, which enables you to deal with the failure of a node. If a node is no longer in a state you like, you can fail that node, and we've got a framework around that to take several actions. So the three work together, but they're all based on one fundamental root of trust, which is a TPM. The TPM, for those who might not know, is the Trusted Platform Module.
B: It's a chip that's found on the vast, vast majority of modern computers: essentially all servers, and lots of laptops have them too. You can even get one for your Raspberry Pi if you want to. Essentially, it's a chip that is capable of doing some simple, fundamental cryptographic operations, and one of them is measuring different aspects of the system as it boots. We use that extensively in Keylime to be able to check the state of the system remotely.
B: So let's look a bit more at what the Keylime architecture looks like. We've got two sides here. One of them is the node on the left, the machine you are actually trying to check, on which we run an agent, the Keylime agent. You can see that the Keylime agent connects to the TPM, or the virtual TPM (we'll get more into that later, but basically, for now, it's just TPMs), and that can run in a container, in a virtual machine, or directly on the machine.
B: All of those use cases are possible, and it will communicate over a network with the Keylime verifier. The Keylime verifier is the one that actually checks the integrity of the node on which the Keylime agent runs; the Keylime agent just sends it quotes, and the Keylime verifier will check those quotes.
B: Some of you might have picked up on the fact that, in the middle, our network doesn't have to be trusted. So often these days, every time we do something that is security related, we'll try to always be using a TLS-encrypted connection. In this case, it's not strictly necessary. It may be desirable, but it's not necessary, because the Keylime agent doesn't really do anything but, well, it does a lot of things.
B: Obviously, but fundamentally what it does is make available a quote from the TPM, and the TPM's quote is cryptographically signed; nothing else on the system is able to forge that signature. So if the signature got modified along the way on the untrusted network, that would be immediately visible. Basically, we have some protection in the capacity of the TPM to sign cryptographically valid quotes, and so we don't necessarily need a trusted network. Having one can be desirable, maybe to protect against some other failures, but it's not necessary, which is an interesting little extra aspect of Keylime. But so, fundamentally, there are those three components: the agent, on the node; the verifier, on your machine, from which you are trying to verify things; and the registrar, to store all the information relative to your node (or your nodes, usually, because you'll have several).
B: How does remote attestation work? Well, as I started describing previously, it's really quite basic. You request attestation before you send your workload: you ask the verifier, "can you please check this node?" The verifier talks to the agent, which requests a quote from the TPM and then sends this quote back to the verifier. Now you have two possibilities. Either the quote is validated and everything's okay: your node has not been compromised, has not changed, it's in a state that you believed to be good and were okay with, and then the verifier might automatically send the agent an encrypted payload, which can run automatically. Otherwise, if it fails its validation, then you will get a revocation event, and the node on which the agent is running will be cordoned off and removed from the group.
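The two outcomes described above can be sketched as follows. Note that this is an illustrative sketch, not Keylime's actual API: a real TPM quote is signed with the TPM's attestation key, and here an HMAC stands in for that signature so the example is self-contained; all function and field names are made up.

```python
import hashlib
import hmac

def verify_quote(quote: dict, expected_pcr_digest: str, key: bytes) -> bool:
    """Check the quote's signature, then compare the reported PCR state."""
    mac = hmac.new(key, quote["pcr_digest"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, quote["signature"]):
        return False  # quote was tampered with in transit: reject it
    return quote["pcr_digest"] == expected_pcr_digest

def attest(quote: dict, expected: str, key: bytes) -> str:
    # Exactly the two possibilities described: deliver the payload, or revoke.
    if verify_quote(quote, expected, key):
        return "deliver_payload"
    return "revocation_event"
```

Because only the TPM holds the signing key, a node (or an attacker on the untrusted network) cannot fabricate a quote that passes the signature check.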
B: Let's go a bit more into the idea of running encrypted payloads. Once the machine passes attestation with the verifier, we can send it back the encrypted payload, which will give it access to some secrets. We have a little example on the right here, where we have some secrets, like a password, and some local actions we want to take; those, for instance, will only be executed if the machine passes its attestation.
B: In this case, it'll receive the payload, it'll have what it needs to decrypt it, and then it'll start running the actions inside the payload. The protocol for exchanging the secrets is a three-part key derivation protocol, I think, but don't push me on that one, Luke might be able to explain it a bit; I'm not quite clear on it exactly enough yet. But it's pretty cool: it basically means you can ship a node, for instance, with secrets on it that it can't read, because it doesn't yet have the keys, and later on reveal the keys to it, so it can read the stuff. So you can embed secrets in, for instance, a master image that you will push onto all your nodes, and yet be sure that, barring being able to break modern cryptography, the node won't have access to the secrets until you decide that it is okay for it to have access.
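The core idea, keys revealed only after attestation, can be illustrated with key splitting. This is a hedged sketch of the concept only (the exact protocol is the part Axel defers to Luke on): the key that unlocks the payload is split into shares, and the node can only reconstruct it once it has received every share.

```python
import secrets

def split_key(k: bytes) -> tuple:
    """Split k into two shares; both are needed to recover k."""
    u = secrets.token_bytes(len(k))          # one share is pure randomness
    v = bytes(a ^ b for a, b in zip(k, u))   # the other is k XOR u
    return u, v

def combine(u: bytes, v: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(u, v))
```

A node shipped with the encrypted payload plus only one share learns nothing about the key; revealing the second share after a successful attestation is what "decides it is okay for it to have access."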
B: We mentioned before that we're also able to do runtime monitoring: not just checking that the system boots into a good state, but that the system remains in a good state over time. You can basically think of this as a tripwire. If anything changes on the system, it will trip the tripwire, and we will have an event telling us of that. For that we use the Integrity Measurement Architecture (IMA), which is a Linux security subsystem.
B: Every syscall is measured and extended into the TPM, but this is done asynchronously, so it's not blocking: it doesn't slow down the system. The state is then compared remotely with what is expected, and if there's a problem, we can fail the node. So, for instance, if somebody executes a script that wasn't planned, that wasn't supposed to be executed on the node, then that will trip the IMA monitoring, Keylime will be able to set off an event, and you can make decisions on that.
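The remote comparison can be sketched as follows: each entry in the kernel's IMA measurement log (the ima-ng template line format is shown in the comment) is checked against an allowlist of known-good digests. The paths and digests here are made up for illustration, and a real verifier handles many more template formats and edge cases.

```python
def check_ima_entry(line: str, allowlist: dict) -> bool:
    """Return True if this measurement log entry matches the allowlist."""
    # ima-ng entries look like: "<pcr> <template-hash> ima-ng <digest> <path>"
    _pcr, _template_hash, _template, digest, path = line.split(maxsplit=4)
    return allowlist.get(path) == digest

# Hypothetical known-good state for one binary.
ALLOWLIST = {"/usr/bin/rsync": "sha256:" + "a" * 64}

good_entry = "10 deadbeef ima-ng sha256:" + "a" * 64 + " /usr/bin/rsync"
bad_entry = "10 deadbeef ima-ng sha256:" + "b" * 64 + " /usr/bin/rsync"
```

An unplanned script simply has no allowlist entry (or a wrong digest), which is what trips the tripwire.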
B: Again, here we have this idea of using the TPM quotes, which cannot be fabricated, as a protection against, say, a system being taken over completely and starting to send fake quotes. In that case, it shouldn't be able to do that: it won't be able to fake the quotes from the TPM compared to what we're expecting remotely, because we have our copy remotely.
B: Well, let's say, for instance, that we realize there's an event on node C and there's a problem, and we want to fail node C. What we might do, for instance, is revoke node C's certificate with our certificate authority, and then send this revocation event to all the other nodes. This is basically what we can do with Keylime: once node C is compromised, we cannot trust it anymore to take any action properly.
B: We have to assume that it's dead and gone, and that we're not going to be able to get anything out of it, and so all our actions are basically going to be about cordoning off node C and modifying the behavior of all the other nodes. You really have to think about it that way, and that's really the main idea. So here the revocation events can be what we just mentioned, for instance revoking node C's certificate, but you could also do things like removing it from SSH authorized keys, cordoning and draining the node using Kubernetes, or shutting down VPN access (having the other nodes remove it from their VPN peers), or adding or removing iptables firewall rules.
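As a sketch of one of the actions just listed, here is what "remove from SSH authorized keys" might look like on a surviving node. The convention of identifying an entry by its trailing comment is illustrative, not something Keylime prescribes.

```python
def remove_authorized_key(authorized_keys: str, revoked_comment: str) -> str:
    """Drop every authorized_keys entry whose comment names the failed node."""
    kept = [line for line in authorized_keys.splitlines()
            if not line.rstrip().endswith(" " + revoked_comment)]
    return "\n".join(kept)
```

A revocation handler would apply this to `~/.ssh/authorized_keys` on every other node, so the compromised node can no longer log in anywhere.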
B: So all those types of actions are possible, and we're working on creating a collection of those rules that will be easily usable by everybody. Let's move into current work on Keylime. The agent, the Keylime agent, is currently in Python; it's being ported to Rust, and work is well underway on that and it's moving forward. So, for those who are interested: why are we using Rust?
B: Well, it's a low-level, quite performant systems language, and it has been designed with security in mind, which fits Keylime pretty well. We also have another issue, which is that Python, in the current setup, ends up pulling quite a lot of dependencies using pip, and this is not always an option, especially for systems that are immutable, like CoreOS, where that's not quite possible. So we were interested in moving to something else for that, and once it's done, our default agent will be the Rust agent.
B: Other work we're engaging in is on IMA: the Integrity Measurement Architecture can also be extended to do namespaces, which are used very much by containers. Once we have that in place, we'll be able to do measurement inside containers, which will also be an interesting, positive development.
B: Lastly, as work for the future, we have some on vTPMs, virtual TPMs. From a purely security standpoint, a virtual TPM is currently not very interesting, because it's not based on any hardware, so it means it can provide fake quotes, and security-wise that's pretty useless. However, for testing reasons, it's already interesting to have. But in the future, what we'd really like is to have what are called nested quotes, which is where a virtual TPM that is inside a container is actually based on a hardware TPM: the quotes the virtual TPM gives are actually based on the quotes from the physical TPM, and so, using that chain, we would then be able to have virtual TPMs inside containers that would still be useful security-wise.
B: So that's one of the next things we're working on. Right, so beyond all this technical stuff, we also have quite a nice community working on the project. For a start, it's multi-vendor, which is always really nice: we have people from Red Hat, as you know, but also from MIT, some people at IBM, people at Netflix and ZTE, and some independent contributors who are also working on the project. And we don't just have developers.
B: We also have other people working on UX, working on outreach and everything, so that's really quite nice. The community's friendly; we have a Slack room on the CNCF Slack, and everybody's very welcome to join, ask questions, take Keylime for a test spin, you know, and see how it grows. We also have a lot of automated testing, we do code quality assessments, and we try to be pretty supportive of new contributors: there's a guide and there's a lot of help available.
A: So thanks for that, and thank you, Luke, also for joining us. The work that you guys are doing to port from Python to Rust: where are you testing currently? If you're running with Kubernetes, are you not able to run tests now on RHEL CoreOS? Or is it just a lot of dependencies, and that's why you're moving off?
C: Yeah, sure. So CoreOS has a read-only nature. Okay, that's not to say you can't use rpm-ostree and so forth, but they also have a stripped-down version of Python (I can't remember the actual name; I think it's "system Python"), and currently the Python agent has a big list of dependencies that are pulled in. With Rust, it's statically linked, so when you compile it, all of your dependencies are in a single blob.
C: Okay, so that just means it's less disruptive to an OSTree-like operating system, just to have a single binary in the tree. So that's one of the reasons that makes the implementation more conducive to a container operating system like Fedora CoreOS or Red Hat CoreOS, and it was actually the Fedora CoreOS community that was encouraging us to do this work as well.
C: So that's the one aspect: we don't have a big pool of dependencies to pull in. Secondly, because Rust is a low-level language, the client can be a bit less resource-hungry; the performance is arguably better, I would say. And then there's the security: not to make a statement that Python is not secure, but because of Rust's strict adherence to ownership.
A: But I guess that's good, and I have a vested interest: my Twitter handle is pythondj, so I'm just showing my bias here. However, I...
C: So, with Keylime, you've got a trinity of systems. You have the agent, which runs on the machine that you want to measure; that's remote to you, so you're performing a remote attestation. And we've got two services, the verifier and the registrar, an integral part; those tend to be a little bit more on-premise, and those are all developed in Python, and they will remain in Python for the foreseeable future.
A: So I guess, and pardon my naivety sometimes in these things: you mentioned earlier that MIT and IBM and Netflix and all of these folks were participating in this. Where are you at in terms of being production ready? I know this is sandbox, so I know that's, you know, a leading question, but what is sort of the status of it?
C: So, as it relates to OpenShift, we're working on a developer preview, and that will be coming at the end of this quarter. This is deeper integration with Fedora CoreOS, and then that will naturally percolate to OpenShift as well. What we're doing initially is looking at securing the infrastructure: when you deploy your workers and so forth, your OpenShift cluster, it will ensure that it's deployed to an infrastructure that has the expected state, and that nobody's tampered with that environment.
C: So we're looking at a developer preview at the end of this quarter; then we'll hopefully move to a tech preview and GA, and a possible date for that (don't hold me to this) is sort of fall '21. So initially we're looking to establish trust, via attestation, for the infrastructure.
A: So in this, and again I'm wearing my OKD working group hat: if we get 4.6 out the door, 4.7 out the door, will it be testable with OKD, which is running on Fedora CoreOS, in the not too distant future? Where are your POCs going right now? Are they running on vanilla Kubernetes, and on what underlying immutable OS?
C: So, at the moment, it's just Fedora CoreOS. What's happening is some folks from Fedora CoreOS are working on a change to introduce this. So, Keylime requires measurements of files. A measurement is a SHA-256 digest of a file, and then what happens is those digests are cryptographically signed and sent from the agent to the verifier, and the verifier will then make a comparison between what is the state on the machine and what is the expected state.
C: If there's a change, you know somebody's tampered with it. So if, for example, we're measuring /sbin/iptables, and it has a hash of xyz on the target machine, but the verifier (which is not on the target machine; this is on premise) expects the file state to be abc, then obviously there's a discrepancy, suggesting that somebody's tampered with that binary. Perhaps they've trojanized it, okay.
C: Okay, that list will then be signed, and then, when you run Keylime, you can tell it which version of OSTree you want to measure. Keylime will make a call to retrieve the list for that particular release of OSTree, perform a GPG verification to make sure it's signed and so forth, and then send it to the verifier, who will then measure the target node, where our workload is running, to make sure that it has that exact version of OSTree running. And then, at that juncture, once we have that proof of concept in place, we'll look at how this can be leveraged by, for example, OpenShift or Kubernetes, to then be part of, for example, a scenario where somebody has an application which they're going to run in a container, and they're going to run it on somebody else's machine, effectively.
C
We've
we've
also
done
some
demos
where
we
had
a
demo
recently,
where
we
had
two
worker
notes:
okay
and
a
controller
and
a
pod
running
on
one
of
the
worker
nodes.
C
We
we
hack
this
comp,
this
worker
node,
okay,
that
hack
was
instantly
picked
up
by
key
lime,
who
they
made
a
call
into
the
controller
to
cordon
and
drain
the
pod
from
the
compromised
worker
node
onto
a
known
good
worker,
node.
And
then
the
kind
of
the
the
good
bit
about
the
demo
was.
C
It
was
a
seamless
experience
for
the
for
the
application
owner,
see
their
pod
migrate
across
from
a
compromised
node
to
it
to
a
known
good
node,
so
that
that's
the
sort
of
the
cool
thing
that
you
can
do
with
key
lime
is,
you
can
measure
a
machine
but
then,
as
soon
as
a
machine
fails,
you
can
tell
other
machines,
controllers
and
so
forth
to
effectively
shut
down
and
ring
fence
compromise
machine
and
migrate.
Your
workloads
to
a
machine
which
is
still
showing
that
it's
tamper-free.
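A hedged sketch of the demo's response path, using Kubernetes API shapes rather than a live cluster: cordoning a node is a patch that sets `spec.unschedulable`, and draining means evicting the pods scheduled on it. The node and pod names are illustrative; a real handler would send these through a Kubernetes client or kubectl.

```python
# The JSON patch body a cordon sends to the Kubernetes API.
CORDON_PATCH = {"spec": {"unschedulable": True}}

def pods_to_evict(pods: list, failed_node: str) -> list:
    """Names of the pods that must migrate off the compromised worker."""
    return [p["name"] for p in pods if p["node"] == failed_node]

pods = [
    {"name": "app-1", "node": "worker-a"},
    {"name": "app-2", "node": "worker-b"},  # worker-b is the hacked node
]
```

Once the failed worker is unschedulable and its pods are evicted, the scheduler places them on a node that is still passing attestation, which is the seamless migration the demo shows.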
A
So
how
do
you
so
just
I'm
getting
my
head
around
it
all
right,
it's
a
bit!
It's
it's
a
bit
low
level
for
what
diane
normally
works
on
and
I'm
really
happy
you're
working
on
it
because
it
sounds
like
we
need
it,
especially
with
you
can
demo
hack
and
a
worker
note.
That's
that's,
probably
not
not
the
best
thing
to
know
about
for
me.
But
how
is
this
going
to?
A
A
C: Sure, so you're very much seeing a work in progress here; those are discussions that are happening at the moment. So with Keylime, this is in the CTO office; it's sort of what we consider emerging tech at the moment. So at the moment we're talking to lots of folks around where Keylime will be situated within the different technologies.
C: So my guess is that Keylime will be quite early in the process of the cluster being deployed, because it needs to measure that the infrastructure is sound. And then, when it comes to Keylime continuously monitoring, and how that's rendered onto a dashboard, that's something that we still need to work out. I can't see it being a challenge; it's just getting consensus around how we do that.
C: Yeah, very much, yeah. And there are lots of considerations, because with Keylime, saying that you trust a system is based on what we call a hardware root of trust. So you have this Trusted Platform Module, the TPM, and the TPM is almost like a very simple version of OpenSSL: it can create keys and it can sign things, and it signs these measurements within the TPM.
A: So, because of where this level of verification is going on, it has a lot of implications for edge and IoT, I would suspect, as...
C: Well, very much, yeah, very much so. A big thing behind Keylime is a big push because of edge and IoT. So when we showed this solution at the Linux Security Summit and the edge and IoT summit, there was a lot of interest around the project there, because it's incredibly good for...
C
Let
me
rephrase
that
incredibly
good
too
incredibly
suited
for
machines
that
are
physically
in
locations
that
can
easily
be
tampered
with.
So,
for
example,
if
somebody's
got
a
an
iot
device
which
is
in
the
roof
of
a
building
somewhere
and
it's
it's
it's
hard
to
sort
of
protect
that
machine
compared
to
when
it's
in
a
big
data
center
with
a
big
security
guard
on
the
door,
you
know
checking
badges
and
so
forth.
C
So
one
there
was
somebody
that
used
key
lime
in
the
raspberry
pi
community
because
they
had
a
camera
on
their
garage
door
which
read
their
number
plate
using
machine
learning
and
then,
if
it
picked
up
their
number
plate,
it
made
a
logic
control
signal
to
the
to
the
automated
door
mechanism
to
raise
the
garage
door.
Okay
and
they
use
key
lime
to
protect
that
raspberry
eye.
A: Yeah. So the other piece: you also mentioned that Mass Open Cloud was participating in this and doing the POC. So are they using it, the POC?
C: No, they're using Keylime. So what they use Keylime for is: if somebody owns a machine and they give it back, and they want to give it to another person, they had this use case that was particular to them, where they didn't want to entirely reinstall the whole operating system and the hypervisor and everything again. So what they do instead is use Keylime to make sure that the person hasn't compromised the machine with something nasty, before they release it to go to someone else.
A: It sits on the edge of: is this a cloud native project, or is this just a damn fine security thing that we should also...
C: Yeah, very much so. When we originally spoke to the Linux Foundation, that was the question: you know, we were thinking we could put this in LF Edge, it could be in CNCF, it could be its own project as such. So we landed on CNCF, just because we were doing a lot of our work around Kubernetes initially, but this really is conducive to the edge and IoT as well.
A
Yeah,
so
if
you
wanted
to
get,
I'm
gonna
make
you
share
axel.
Maybe
your
screen
one
more
time
and
go
to
the
key
lime
landing
page,
because
it's
a
different
has
a
different
extension
than
a
lot
of
the
other
ones.
I
think,
because
key
lime.
A: ...dot dev. And maybe go to, when you're coming in, Community.
A: Yeah, see if you can share that, because that would be good just for people to see, you know, where you're at, because that took me a minute or two. I think I got somebody else's key lime recipe page the first time I googled you all, and not that I cook, but you know, it looked good. This looks better. Okay.
B: Yes, I have to be honest with you: I've never actually baked a key lime pie. But let me know if you can see this page.
A: I can indeed, and okay.
A: Yeah, that's good to know. And if people want to, where and how do they find out when your community meetings are happening? Where is that schedule?
A: Perfect, all right, cool. And I would tell you also: when we get the updates to Fedora CoreOS for this to all work, I would love you guys to come to the OKD working group meeting, which is on Tuesdays, and come and talk about it there, because there have been some conversations between the Fedora IoT, Fedora CoreOS and OKD working groups about using OKD on the edge. It's not there yet, and we don't really have the resources beyond getting our releases out right now, but there are actually quite a few people there who are interested in this space.
A
That
probably
could
help
test
it
for
you,
especially
with
okd
being
running
on
fedora
core
os.
I
think
that
might
give
you
a
first
test
bed
for
openshift
that
might
help
and
that
I'd
be
thrilled
to
see
that
see.
That
collaboration
happen
between
the
two
work
or
three
working
groups.
You
know
key
lime,
fedora
core
os
and
okd.
A
That
might
be
a
a
great
breeding
ground
for
some
more
contributors
to
this
project.
So
hopefully
that
so
what
else
should
I
be
asking
that
I'm
not
asking?
I
I
you
know
what
what's
the
thing
you
you
haven't.
You
stumped
me
because
now
I
have
to
go
out
and
play
with
this
and
and
watch
you
guys
grow
this
community.
But
what?
What
is
it
that
that
I
should
have
asked
that
I
haven't
asked.
B
One
of
the
concerns
we
often
get
is,
but
so
how
many
nodes
can
you
have
like?
Can
you
get
10?
Can
you
get
100?
I
mean
that's,
usually
the
one,
a
question
that
comes
up
quite
fast,
so
currently
we
know
that
it
can
scale
up
to
thousands.
So
with
one
verifier,
you
can
check
a
thousand
several
thousand
machines
and
I
think
luke.
You
think
it
can
go
quite
a
bit
further
from
what
we
have
as
info.
So
that's
one
of
the
questions
we
often
get.
C
The
other
thing
as
well
with
keyline
you
get
the
impression
that
it
might
be
all
these
complex
protocols
and
raw
network
connections.
It's
not
everything
talks
over
a
rest
api,
so
so
all
of
these
services
and
the
agent.
It's
all
plain
rest,
so
that's
the
the
the
only
sort
of
slightly
arcane
part
is
where
we
talk
to
the
hardware,
but
the
rest
of
it
is
very
much
a
kind
of
a
modern
approach
to
developing
a
web
service.
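Since everything is plain REST, querying a node's trust status is an ordinary HTTP request. This sketch builds such a request and reads the answer; the endpoint path and the response field names are assumptions for illustration, not a documented Keylime contract.

```python
import json
from urllib.parse import urljoin

def agent_status_url(verifier_base: str, agent_id: str) -> str:
    """URL for one agent's status on the verifier (hypothetical path)."""
    return urljoin(verifier_base, f"/v2/agents/{agent_id}")

def is_trusted(response_body: str) -> bool:
    # e.g. the body fetched with urllib.request.urlopen(agent_status_url(...))
    return json.loads(response_body).get("operational_state") == "verified"
```

Anything that can make an HTTP call, a dashboard, a CI job, a controller, can consume the attestation state this way.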
C: We're also looking at integrating with other projects as well, around making it easier to authenticate with Keylime, and single sign-on.
C
To
mention
there
is,
if
you
go
to
our
home
page,
there's
a
demo,
and
this
is
key
lime
protecting
a
a
three
node
xcd
cluster.
So
the
first
five
minutes
are
on
me
sort
of
talking
about
the
project,
but
the
second
five
minutes
you'll
see.
There's
some
terminals
you'll
see
the
actual
solution
working
there.
So
what
we
do
is
we
we
compromise
one
of
the
scd
nodes,
okay
and
then
the
it's
removed
from
the
cluster,
and
we
delete
some
ssh
keys.
C
All
right
so
I
mean
that
that's
what
yeah
that's
one
of
the
good
things
with
key
lime.
Is
this
revocation
framework
you
can
anything
that
you
can
dream
up
of
writing
in
python.
Keyline
will
run
for
you.
So,
for
example,
if
a
machine
fails
you
want,
you
might
want
all
of
the
other
machines
to
update
a
an
iptables
rule.
You
just
write
a
simple
python
script
in
ip
tables
and
then
keyline
will
securely
transfer
that
to
the
machines
and
it'll
be
securely
run
on
those
machines.
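Such a "simple Python script for iptables" might look like the following sketch. How the failed node's IP address reaches the script is illustrative here; the point is only that a revocation action is ordinary Python that builds and runs a command on each surviving machine.

```python
import shlex

def block_node_command(failed_ip: str) -> list:
    """iptables invocation that drops all traffic from the failed node."""
    # e.g. pass the result to subprocess.run(...) on each surviving machine
    return shlex.split(f"iptables -A INPUT -s {failed_ip} -j DROP")
```

Keylime ships the script to the other nodes and executes it when the revocation event for the failed node fires.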
A: I'm thinking anybody who's a cloud hosting provider, who's supplying servers and, you know, GPUs and HPC machines, and needs, you know, secured, compliant systems: there are going to be a lot of interesting use cases that come up in the next little while, so it'll be interesting to see how this plays out. And I'm glad it's in the Cloud Native Computing Foundation, frankly, because I probably wouldn't...
A: ...have heard about it until it surfaced somewhere in OpenShift in upcoming releases. But that would be an interesting use case. I'm curious to see, you know, when people ask for this kind of attestation from their cloud hosting providers, I could see them saying, "yeah, this is really running, you know, Fedora CoreOS, this version," or "it's running RHEL CoreOS," or whatever other immutable operating system.
A
This
is
it's
really,
I
think,
an
integral
part
of
the
puzzle
for
people
to
really
trust
kubernetes
at
a
high
scale
and
to
get
into
those
high
security
customers
or
end
users
scenarios
as
well.
So
that's,
that's
always
been
an
interesting
aspect
of
kubernetes.
B
There's
a
very
interesting
aspect
of
key
lime,
which
is
to
sort
of
move
the
root
of
trust
away
from
basically
just
the
sort
of
the
social
trust
you
have
in
your
cloud
provider
and
the
promise
that
they're,
not
you
know,
going
to
mess
things
up
in
the
background
and
moving
that
to
actual
hardware
route
of
trust
is
which
is
based
in
silicon,
which
is
a
different
kind
of
type
of
trust.
But
for
some
cases
it's
much
more
useful
or
it's.
A
Yeah, it's kind of interesting. I should think that the hardware providers would be very interested in this as well. You know, we do a lot of work with NVIDIA and other folks, people who make chips and things of that nature. I'm curious to see how they interact; hopefully they'll watch this, become aware of the project, and see if they can help move it forward as well,
so kudos to you guys for getting it this far: going from a paper at MIT, which we'll put the link up (I'll take a look at that myself, and hopefully other people will), to collaborating with MIT, IBM, Netflix, the Mass Open Cloud, and everybody else to solve their use cases. I'm really looking forward to seeing how this comes into an OpenShift release.
A
So come back, please, when that hits. Come to the OKD working group when you're ready, or even if you just want to expose this; I will share with them the video that we're making today, and we'll make sure that it's on their radar, because I think that's important, and there are also a lot of security folks that are part of OpenShift Commons
A
that I think will be very interested in this as well. So I'm really looking forward to seeing and helping you guys grow this community, and I'm totally thrilled that you've gotten to Sandbox. It'll be interesting to see how long it takes to incubate you guys and maybe get you to be an official project, whether you end up officially in the CNCF, or whether you find that you're playing more on the edge, in the IoT space, and need a more generic home.
A
But I think the Kubernetes community is going to really appreciate this and embrace it, so I'm looking forward to that. The other question I have for you guys, well, actually one came in: does Keylime act at the pod level as well, to ensure pod security, and not only at the infra VM level? Once again, it's in the chat there.
C
Yeah, that's a very good question. As far as measuring trust within a container, this is something that we're looking at. As you would have heard mentioned earlier, we use something called IMA in the Linux kernel, which is used to do the measuring. So what happens is, when a syscall is made, IMA will measure the object that's requesting that system call; IMA sits alongside SELinux in the Linux security subsystem.
C
So what will happen is, if you run a script as root, that will be measured; it will be put into the TPM, signed, and then sent to the remote verifier to verify.
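The "put into the TPM" step Luke mentions is the TPM's extend operation: each new measurement is folded into a Platform Configuration Register by hashing it together with the register's current value, so the PCR ends up as a tamper-evident digest of the whole measurement sequence. Here is a minimal sketch of that chaining; the event names and the choice of SHA-256 are illustrative only, not taken from a real measurement log.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend: new PCR value = H(old PCR value || measurement).
    # A PCR can only be extended, never written directly, so the final
    # value commits to every measurement and to the order it arrived in.
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start zeroed at boot; each measured object extends the register.
pcr = bytes(32)
for event in [b"bootloader", b"kernel", b"/root/script.sh"]:
    pcr = pcr_extend(pcr, hashlib.sha256(event).digest())

print(pcr.hex())
```

Because the chain is order-sensitive, a verifier that replays the measurement list and arrives at the same PCR value knows the list was neither truncated nor reordered.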
Okay, now for us to get that to work in a container, we need an IMA namespace, and we're actively working on that, alongside some people in the Linux kernel community. So we fully anticipate being able to do the same level of trust measurement within a pod, since a pod is essentially a container.
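On the verifier's side, the flow Luke describes amounts to walking the IMA measurement list and checking each file's digest against an allowlist of known-good values. The sketch below uses a simplified stand-in for the kernel's ima-ng log format (PCR, template hash, template name, file digest, path); the paths, digests, and allowlist are fabricated for illustration, not drawn from a real system or from Keylime's actual API.

```python
# Simplified stand-in for /sys/kernel/security/ima/ascii_runtime_measurements.
# All hashes and paths here are made up for the example.
SAMPLE_LOG = """\
10 1111111111111111111111111111111111111111 ima-ng sha256:aaaa /usr/bin/bash
10 2222222222222222222222222222222222222222 ima-ng sha256:bbbb /root/evil.sh
"""

# Hypothetical allowlist mapping file paths to known-good digests.
ALLOWLIST = {"/usr/bin/bash": "aaaa"}

def verify_measurements(log_text: str, allowlist: dict) -> list:
    """Return paths whose measured digest is missing from, or different
    to, the allowlist. An empty list means the attestation passed."""
    failures = []
    for line in log_text.strip().splitlines():
        # The path may contain spaces, so split at most four times.
        _pcr, _template_hash, _template, filedata, path = line.split(" ", 4)
        _algo, digest = filedata.split(":", 1)
        if allowlist.get(path) != digest:
            failures.append(path)
    return failures

print(verify_measurements(SAMPLE_LOG, ALLOWLIST))  # -> ['/root/evil.sh']
```

In real Keylime the agent also sends a TPM quote over the PCR, so the verifier can confirm the log itself was not tampered with; a bare allowlist check like this one only illustrates the policy half of the verification.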
C
A
And the ETA for the IMA namespace landing in the Linux kernel?
C
I wish I knew. Things get discussed a lot on the Linux kernel mailing list first, so we're just trying to hash out an agreement and alleviate any points of contention, which always do come up on the Linux kernel if you're trying to get something like that in.
A
So I know KubeCon is coming up soon, November 17th. Do you have any birds-of-a-feather sessions or meetings? Did you get any space at KubeCon, or...
A
So, well then, what we'll just have to do is make sure everybody shows up at your Wednesday meetings and continue to push people to come and find you.
C
Yeah, there's lots to do. We're a pretty happy, friendly community, so you know we have a policy that there are no stupid questions when you're standing up Keylime. And yeah, you'll find us on the CNCF Slack; there are always people chatting on there.
B
All right, yeah, and I was going to add, I mean, Diane and everybody else: if you find new cool ways of using Keylime, don't hesitate to come and share them. It's a fun thing to think about how you can use this in ways that might not have been initially intended but can actually be useful. So don't hesitate.
A
Well, I think the Raspberry Pi garage opener example use case is probably my favorite of the day, so I think that'll be interesting. We have a whole bunch of Raspberry Pis around my house, so hopefully we'll do that in our spare time, which we all have so much of. I was going to say, one more time to the audience out there:
A
if you have other questions, please speak up, throw them in the chat. We'll give you a couple more seconds, and then we're going to let you guys go back to making key lime pies, or Keylime in Rust, or whatever it is.
C
A
I have no puns left today; it's been a long weekend. And when you get the Fedora CoreOS things in, please ping me and we'll have you back, and we'll have you back with the OKD working group as well. And maybe, just maybe, we can stand up a couple of examples of this and have the demo run with OKD, which would be one of my happy days as well.
A
I think that would be great, maybe a spin-off sub-group from the OKD working group, because I know there's a lot of interest, and we're really thrilled about the collaboration between the Fedora CoreOS and OKD communities; there's a lot of cross-pollination there. So hopefully we can make something happen for you, as well as for other Kubernetes folks out there. Well, this is something that I should hope we can get into the slipstream, upstream of Kubernetes,
A
sooner rather than later, though everything takes time, especially when it's this low-level. So here's hoping the Linux kernel folks listen to you, incorporate your requests, and get you moving down the path soon.