From YouTube: Lightning Talks - Dan Lorenc, Aysylu Greenberg
Description
Lightning Talks - Dan Lorenc, Aysylu Greenberg
Join us for KubeCon + CloudNativeCon in Shanghai June 24 - 26 and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A
Welcome, everyone. How's everyone feeling? Great. These are the first lightning talks, so I'll try to keep the energy up. Today I will talk about software supply chain management with Grafeas and Kritis. Who here has heard of the Grafeas project? Raise your hands. Okay, great. What about Kritis? Sweet, awesome. For those of you who know what the projects are, this will be a quick review, and for those of you who don't, don't worry: you'll know all about them.
A
By the end of this ten-minute overview, you will. I'm Aysylu Greenberg; Dan Lorenc and I own the Grafeas and Kritis projects, and I'm on Twitter talking about Grafeas and Kritis if you want to follow along. So here is where Grafeas and Kritis fit in the software supply chain. We have our engineer, who writes code and commits it to the code base. Then she wants to build and deploy it, and she'll do this using the CI/CD pipelines that we'll be learning about all day today.
A
The pipelines do different types of analysis, for instance license analysis on the code base, and then deploy checks before deploying to production. All the different types of metadata generated by the CI/CD pipelines can be stored in Grafeas: information like vulnerabilities, build information, source information, and so on. Grafeas is a centralized metadata knowledge base for all the different CI/CD tools.
A
The vendors just need to know how to speak the Grafeas format and be able to store it in a Grafeas-backed storage system. Kritis, in turn, is used for deploy-time policy verification. It's an admission controller that checks policies: for instance, the severity of vulnerabilities, or the location of an image. Does it come from a trusted location? If everything passes the checks, the image is deployed to production.
A
Otherwise, the image is rejected from entering production, with some explanation of why. Kritis itself doesn't store any information about vulnerabilities; it talks to Grafeas to retrieve that metadata. So we can create rich policies based on the custom set of things you care about at deploy time. That's the overall picture of where Grafeas and Kritis come in in the software supply chain and how they interact with CI/CD pipelines.
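The deploy-time check described here can be sketched roughly as follows. This is a minimal illustration, not the real Kritis API: the policy fields, the severity ordering, and the metadata shapes are all made up for the example, and real Kritis reads policies from Kubernetes resources and metadata from Grafeas.

```python
# Hypothetical sketch of a Kritis-style deploy-time policy check.
# Policy fields (trusted_registries, max_severity) are invented names.

SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def admit_image(image, vulnerabilities, policy):
    """Return (admitted, reason) for an image against a simple policy."""
    # Check the image comes from a trusted location.
    registry = image.split("/")[0]
    if registry not in policy["trusted_registries"]:
        return False, f"untrusted registry: {registry}"
    # Reject if any vulnerability exceeds the tolerated severity.
    limit = SEVERITY_ORDER.index(policy["max_severity"])
    for vuln in vulnerabilities:
        if SEVERITY_ORDER.index(vuln["severity"]) > limit:
            return False, f"{vuln['cve']} is {vuln['severity']}"
    return True, "all checks passed"

policy = {"trusted_registries": ["gcr.io"], "max_severity": "MEDIUM"}
vulns = [{"cve": "CVE-2019-0001", "severity": "HIGH"}]

print(admit_image("gcr.io/shop/site:v1", [], policy))     # admitted
print(admit_image("gcr.io/shop/site:v1", vulns, policy))  # rejected
```

The point of the sketch is the shape of the decision: metadata comes from one place (Grafeas), the policy is declared separately, and the admission controller just evaluates one against the other.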
A
Grafeas and Kritis are not new projects; they've been around for a while, and there are actually Google-internal implementations of them available as proprietary products. So now let's talk about Grafeas in a little more detail. I'll be referring to it as an artifact metadata API, so let's unpack that a bit. Artifact refers to images, binaries, packages. Metadata describes builds, deployments, vulnerabilities, licenses: any type of metadata you might care about for your artifacts. And API means storing and retrieving metadata about those artifacts.
A
Now, there are a couple of points of terminology that will be useful for understanding what Grafeas does and how it does it. Notes are high-level descriptions of types of metadata. For instance, Common Vulnerabilities and Exposures, known as CVEs in the industry, are stored as vulnerability notes. Occurrences are instances of notes in a specific artifact. So, for instance, the presence of a vulnerability in an image is stored as a vulnerability occurrence. Resource URLs are unique identifiers for artifacts in a given system.
A
So
that's
how
you're
able
to
tell
given
this
image.
What
is
its
build
information
and
it's
a
vulnerability
information
so
and
be
able
to
link
across
all
the
different
metadata
that
you
have
stored
in
graphics
and
be
able
to
reason
about
it,
and
so
you'll
have
different
ways
to
refer
to
it
like
Debian
packages
and
darker
images,
and
generic
files
will
have
a
way
to
describe
them
in
a
resource
URL.
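The note/occurrence/resource-URL relationship can be modeled minimally like this. The dict shapes and the example URL are illustrative simplifications; the real Grafeas API defines strict per-kind schemas.

```python
# Illustrative model of the Grafeas data model: notes describe a type
# of metadata, occurrences attach a note to a concrete artifact, and
# the artifact is identified by its unique resource URL.

notes = {
    "CVE-2014-0160": {"kind": "VULNERABILITY", "summary": "Heartbleed"},
}

# Occurrences are instances of a note in a specific artifact.
occurrences = [
    {
        "note": "CVE-2014-0160",
        "kind": "VULNERABILITY",
        "resource_url": "https://gcr.io/shop/site@sha256:abc123",
    },
]

def occurrences_for(resource_url):
    """Link across all stored metadata for one artifact via its URL."""
    return [o for o in occurrences if o["resource_url"] == resource_url]

for occ in occurrences_for("https://gcr.io/shop/site@sha256:abc123"):
    print(occ["note"], notes[occ["note"]]["summary"])
```

Because every occurrence carries the artifact's resource URL, asking "what do we know about this image?" is a single lookup across all metadata kinds.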
A
Now
we
have
kind
specific
schemas
which
has
strict
schemas,
which
allow
us
to
first
of
all
evolve
our
schemas
and
by
adding
new
metadata
types
as
we
find
needs
and
also
allows
us
to
describe
in
in
a
way
that
is
vendor
agnostic.
So
across
different
vendors,
all
the
necessary
information
to
describe
builder
vulnerabilities
is
described
in
a
corresponding
schema.
A
So,
for
instance,
we
have
deployment
notes
which
describes
an
artifact
that
was
deployed
in
some
runtime,
and
so
then
resource
URI
will
describe
the
resource
that
was
deployed
in
a
deployment
note
and
then
deployment
occurrence
will
have
information
like
who
deployed
it.
Users,
email
deployed
time
may
be
unemployed
time,
and
so
on
so
graph
is,
is
an
open,
artifact
metadata
center
with
contributions
already
from
the
industry,
it's
developed
out
in
the
open
source.
It's
it's
used
to
audit
and
govern
your
software
supply
chain.
It's
the
knowledge
base
for
hybrid
cloud
solutions.
A
So
if
you
have
on
promises
and
cloud
clusters,
it's
meant
to
support
both
and
it's
an
API.
We'll
talk
about
storage,
backends,
I,
didn't
go
into
examples
of
this,
but
you'll
see
in
a
moment
how
it
doesn't
really
matter
what
storage
backs
the
graph
is.
We
have
implementations
against
progress
and
spanner
and
so
on,
and
it
just
works.
It's
the
universal
metadata
API.
So
what's
kritis,
it's
deploy
time
policy
verifier.
So,
instead
of
talking
what
are
the
different
features
that
has,
let's
run
through
an
example
and
then
you'll
see
exactly
how
it
works.
A
So
we
want
to
deploy
our
e-commerce
websites.
So
the
admission
flow
looks
like
this:
would
your
coop
control
apply
side
Yammer,
which
describes
our
e-commerce
side
and
then
so
we'll
send
this
command
into
the
kubernetes
cluster,
which
has
Critias
installed
inside
edie,
using
the
helm,
install
command?
A
If
you
don't
know
what
sir
DISA
we'll
take
a
brief
look
at
what
they
are
and
you'll
hear
a
lot
more
about
them
at
coupe,
come
this
week
so
now
image
security
validator
will
be
used
to
actually
enforce
those
policies
and
we'll
fetch
metadata
for
the
vulnerability.
So
image
security
policy
describes
what
vulnerabilities
we
are
willing
to
tolerate
on
an
any
given
image
that
we're
about
to
deploy
and
so
will
fetch
metadata
from
Dreyfus
about
vulnerabilities.
So
now,
oh
no,
but
with
vulnerability,
scan,
hasn't
finished
and
so
graduates
doesn't
have
any
information
about
vulnerability.
A
There is no metadata we can fetch, so at that point we reject the pod and reject the request, because we want to make sure that for every single image that comes in, if an image security policy is defined, a vulnerability scan has verifiably run. Now vulnerability scanning finishes and we find a vulnerability. Grafeas is able to retrieve that and return it to Kritis, and the pod is denied, because now we have a vulnerability. So we investigate it, because vulnerability analysis is a very hard problem, and we find out that it actually doesn't affect our image.
A
So we can just whitelist it: okay, fine! We whitelist it, and then the pod is admitted. Great, we succeeded. Now it's time to scale up the website: now that we have one pod, let's go up to four, and we'll do this with the kubectl scale command. We've brought up two pods, and as the third and fourth pods are coming up, there's the vulnerability analysis that runs separately.
A
Maybe you pay a vendor to do it, or you do it yourself in a separate process, and this separate analysis finds a new vulnerability. So what does this mean? If we find a new vulnerability, and it now violates the existing security policy, does that mean we can no longer scale up, and we disrupt our website? That would disrupt our customer experience, right? To prevent this, what Kritis has is attestations. Taking a step back to when we first deployed the first pod:
A
What
happens
is
greediest
has
an
a
tester
inside
that
we
actually
cluster
admin
specifies
using
a
gestation
authority
CD
that
any
admitted
image
will
also
have
an
attestation
written.
So
then,
we'll
store
at
the
station
for
the
admitted
image
via
Gracias
and
then
any
and
then
when
a
request
to
scale
comes
in,
even
if
we
find
a
new
vulnerability,
but
we
admitted
this
image,
then
we'll
fetch
at
the
stations
for
the
admitted
image
and
then
allow
it
to
scale
up
so
now
we're
good
and
now
to
make
sure
that
we
actually
already
discovering
your
vulnerability.
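The attestation flow just described can be sketched as follows. Everything here is a stand-in for illustration: a plain set plays the role of attestations stored via Grafeas, and the function names are not the real Kritis API.

```python
# Sketch of attestation-based admission: on first admission we record
# an attestation for the image; on later scale-ups, an image with an
# attestation is allowed even if new vulnerabilities appeared since.

attestations = set()  # stands in for attestations stored via Grafeas

def admit(image, passes_policy):
    """Admit if the policy passes now, or if the image was attested."""
    if passes_policy:
        attestations.add(image)  # write an attestation on admission
        return True
    # Previously attested images stay deployable, e.g. for scale-up.
    return image in attestations

assert admit("site:v1", passes_policy=True)       # first deploy: checks pass
assert admit("site:v1", passes_policy=False)      # scale-up after new CVE: ok
assert not admit("site:v2", passes_policy=False)  # never attested: denied
```

The design point is that scaling an already-running workload does not re-litigate the policy; the background cron job described next is what surfaces the newly violating pods to the admin instead.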
A
If the Heartbleed bug is discovered, right, we don't want to just keep running our e-commerce website and never find out about it. So there is a background cron job that runs, looks at all the running pods, and then adds labels and annotations to them, so the cluster admin can verify which running pods no longer satisfy the image security policy. Great. So, a couple of points of terminology: we discussed resource definitions for storing policies as Kubernetes objects.
A
This
is
just
an
example
and
it
could
be
very
expressive,
so
kritis
was
developed
first
and
foremost
open
source
and
it
was
built
to
put
the
community
with
the
industry
contributions.
It
plugs
into
kubernetes
admission
controller
and
ensures
the
vulnerability
scanning
Israel
if
we
actually
enforce
image
security
policies
before
appointment
and
a
test
images
and
verifies
them
before
deployment
and
applies
consistent
polish
deploy
policy
across
communities
environments.
A
So
a
very
exciting
thing
that
is
happening
right
now
in
the
graph
is
in
kritis
world
is
the
launch
of
0.1
point:
oh
release
that
I've
been
working
on
I'm,
we're
hoping
to
release
it
this
bottom,
so
there's
three
goals
for
this
release.
One
of
them
is
to
enable
users
to
start
experimenting
with
graduates
and
critters
on
their
laptops,
desktops
and
so
on.
We
are
moving
towards
hybrid
cloud
support
and
we
would
like
to
gather
community
feedback
from
all
of
you
once
you
try
it
out
to
help
us
prioritize.
A
What
features
are
most
needed
in
the
industry?
What
features
are
most
relevant
and
most
helpful
for
you
to
be
productive?
The
scope
is
to
have
standalone
credits
on
kubernetes
with
standalone
graph
is
sort
of
we're
talking
to
each
other
and
the
two
kind
of
user
journeys
you
can
think
about
it
work
a.
What
can
you
do
with
that?
A
So
you
should
be
able
to
allow
the
appointment
of
a
container
to
kubernetes
cluster
that
passes
all
the
checks
and
then
block
deployment
of
an
unadmitted
container
to
the
question
actually
see
it
fail
when
it's
something
is
wrong
with
it.
The
features
are
being
added.
Just
a
quick
overview
home
chart
for
graph
is,
and
a
published
image
and
standalone
graph.
A
survey
with
phosphorous,
storage,
back-end
is
up
and
running
and
functional,
and
then
basic
support
for
go
client
library.
We
have
not
forgotten
about
many
of
you
who
are
using
other
languages.
A
It's
just
that
for
this
specific
release,
we'll
focus
on
one
language
that
was
the
most
requested
language
from
the
community
and
then
for
those
of
you
who
use
Python,
Ruby,
Java
and
so
on.
Let's
continue
working
and
talking
with
each
other,
so
we
can
prioritize
this
accordingly
and
now
in
kritis
we're
adding
generic
attestation
policies
so
that
cluster
admin
can
actually
say.
Okay,
this
image
is
fine,
don't
worry
about
running
any
checks.
I
trust.
A
This
I
have
manually
verify
this
so
from
now
on,
admit
all
the
images
that
I
verify
and
then
also
you
could
say,
you
know,
don't
admit
any
of
the
images
I
haven't
there.
So
generic
attestation
policy
basically
allows
you
to
do
that,
and
now
the
Fort
Edmonton
fold
back
policy
is
we
want
to
make
sure
that's
well-defined?
What
if
you
don't
specify
any
policies
and
we
want
to
make
sure
the
credits
is
configurable
for
all
of
your
different
needs.
A
So, if you'd like to learn more and follow along, the two projects are on GitHub under the Grafeas organization: grafeas and kritis. There are two Google Groups if you'd like to start using Grafeas and Kritis: grafeas-users and kritis-users. If you'd like to contribute, there's the grafeas-dev group, and we're also on Twitter as @grafeasio.
So now, to conclude the talk, I'll end with a question: how many of you are now interested in trying out and/or contributing to Grafeas and Kritis? Show of hands. Great.
B
Hello, how are you all doing? My name is Erin Ericson and I work for Salesforce. I am on the product side, but I've spent most of my career as a software developer, and I've had a lot of experience fighting bad tests and having to deal with that kind of reality. So I've been thinking about this and how it could help the overall ecosystem of things that we have in the Continuous Delivery Foundation.
B
My thought is: what if we did something interesting? Today we're going to talk about something I like to call a declarative quality engine, which I'm naming CQO, which stands for Chief Quality Officer. We might figure out a better name; I literally just named this today, so if you have a better idea, please talk to me. The idea here, though, is this: we've talked about continuous delivery, and we've talked about continuous integration.
B
There are budget limits. Here at Salesforce we have a rather large code base with about a million tests. It is not financially viable to run every test we have for every single pull request that comes into our code base. So there are budgetary limits: we could easily spend five billion dollars a year running tests, which would pretty much eat our entire profit margin, so we probably shouldn't do that.
B
So any kind of system like this that uses a declarative engine has to ask: what is our budget, and how do we stay within that limit? Obviously, we know that the earlier you catch a bug, the cheaper it is to resolve, and so generally what you want to do is understand whether you have a defect as soon as you possibly can.
B
Oh, I've never seen that before: how many of you have ever worked in an organization that said thou shalt have code coverage to a certain percentage, and then saw some really bad tests happen as a result of that? Yeah, everybody in this room, and if you haven't, you're lying. I have stories I can tell you. Not at Salesforce, all our tests are perfect, but I've been doing this for a while and I've seen places; I have some more stories, but there are probably low-quality tests.
B
We also know that quality isn't just your unit tests. Who here has a perfect unit test pyramid? Yeah, nobody has a perfect unit test pyramid; that doesn't really exist. I mean, we all aspire to it, right? We all aspire to have a massive number of unit tests that run very, very quickly and just a small number of end-to-end tests that take a long time. But the reality is different.
B
You have test sets, all sorts of different things you can run. You also have a set of constraints, right? You have a set of policies, and you have a budget; a budget is a constraint just like any other constraint. You have different kinds of environments that you have to run on, and so forth. You have targets: you might want to say that at a certain level of pass rate, at a certain stage, I consider that to be okay. You might allow for a certain amount of nondeterminism in certain kinds of tests.
B
In fact, that's generally seen as a good thing, because sometimes you can model the nondeterminism in your tests that you expect. You might also have business outcome expectations: different kinds of tests where I want to say I got this much lift, and you might want to say what those targets are.
B
So those are your inputs. Moreover, we want to run the right tests at the right time. There are all sorts of different kinds of tests if you look at this in a more holistic way: smoke tests that I want to run early; certain kinds of security tests that I want to run very, very quickly before somebody checks in, because I want to know if somebody is putting a secret into my repository and be able to just reject that before I do a pull request. Okay, and I want to be able to do smoke tests.
B
Some tests I'm going to wait on until the end, until the change passes the other test suites that I have. Most of us know this, and we configure our CI/CD pipelines to do these things, which means every one of us in this room is doing something that a machine could do, making choices for us independently, but we're doing it ourselves. We don't have to. At the end, after a deployment, there are even tests like: is it actually achieving the outcome I want? Is it getting the conversion I want?
B
I might want to do other kinds of testing actually in production, and we can manage that as well. But we do want to be conscious of the fact that, obviously, the tests to the left are cheaper and the ones to the right are more expensive, and that should become part of any kind of engine that's making decisions for us.
B
So what would be the components of something like this? You would have something called a test prioritizer. What does that do? A test prioritizer says: based on the change you're making, based on your pull request, you will probably want to run these tests, and maybe not those tests over there. That becomes one of the levers we can use to manage our testing budget and also manage down the amount of time it takes to get feedback.
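The test prioritizer just described could be sketched like this. The relevance model here (shared path prefixes) is a deliberately naive stand-in for whatever a real system would learn, and all names and data shapes are hypothetical.

```python
# Hypothetical test prioritizer: given the paths a pull request touches,
# pick relevant tests, cheapest first, until a cost budget is spent.

def prioritize(changed_paths, tests, budget):
    """tests: list of (name, covered_path, cost). Returns chosen names."""
    def relevant(test):
        # Naive relevance: the test's covered path overlaps a changed path.
        return any(test[1].startswith(p) or p.startswith(test[1])
                   for p in changed_paths)

    ranked = sorted((t for t in tests if relevant(t)), key=lambda t: t[2])
    chosen, spent = [], 0
    for name, _path, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

tests = [
    ("test_checkout", "shop/checkout", 5),
    ("test_cart", "shop/cart", 2),
    ("test_search", "search/", 3),
]
print(prioritize(["shop/"], tests, budget=7))  # ['test_cart', 'test_checkout']
```

Even this toy version shows the two levers the talk mentions: relevance filtering cuts irrelevant tests entirely, and the budget caps total spend per pull request.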
B
That feedback is useful to somebody. We may also want to be able to say which tests are better or not; not all tests are of equal value, just on their face. There are high-quality tests that actually assert things, that run through the right parts of the code base, and that run quickly, and there are low-quality tests. We ought to be able to rank those and choose them based on their rank. And we ought to be able to have an engine, which is kind of like a controller, that can inject the right tests at the right phase of the CD pipeline, according to whatever quality outcomes we're looking to get. We need something to manage our tests and environments. We need an API-first design; I think that's pretty obvious to most people in here. But we also need things to be plugin-driven, so that we can compose these different kinds of workflows, actually in real time, and be able to make decisions and run them in different ways as the system gets smarter.
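A ranking function along these lines might look like the sketch below. The scoring signals and weights are invented for illustration; a real ranker would learn them from CI history rather than hard-code them.

```python
# Sketch of test ranking: score tests by signals a CI system already
# has (runtime, flakiness, how often failures were real bugs), so an
# engine can place or drop tests by rank. Weights are made up.

def rank(tests):
    """tests: dicts with name, runtime_s, flake_rate, true_failures.
    Returns test names, most valuable first."""
    def score(t):
        value = 1.0 + t["true_failures"]          # reward real bug-catchers
        penalty = t["runtime_s"] / 60 + 10 * t["flake_rate"]
        return value - penalty
    return [t["name"] for t in sorted(tests, key=score, reverse=True)]

tests = [
    {"name": "fast_asserting", "runtime_s": 1, "flake_rate": 0.0,
     "true_failures": 3},
    {"name": "slow_flaky", "runtime_s": 600, "flake_rate": 0.3,
     "true_failures": 0},
]
print(rank(tests))  # ['fast_asserting', 'slow_flaky']
```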
B
So how would we build something like this? Again, this is an early conversation. Unlike the last talk, this is something where we have parts of it, but we're looking for help from the community to decide, one, is this a good idea to do, and I hope we talk about that later; and two...
B
How would we go about actually making this real? There are components that go into a system like this: the ranker, the prioritizer I was talking about before. The CI systems actually produce a set of useful things, a set of useful data that we call CI nouns, such as the set of test results. Is there a canonical way that we can talk about what test results are, in a way that a system like this can reason about them? That could actually be really interesting.
B
So
we'd
start
with
these
components
that
are
independently
useful
outside
of
a
a
system
that
you
know,
orchestrates
is
stuff
using
machine
learning
to
figure
out.
Then,
what
are
the
ways
that
we
map
these
tests
to
any
kind
of
constraints
that
we
might
have,
so
we
can
kind
of
meet
the
broader
goal
of
actually
managing
to
a
budget
or
actually
trying
to
have
a
system
that
puts
the
right
tests
in
the
right
place
over
time
and
get
smarter
about
that?
B
Okay,
so
that'd
be
kind
of
more
of
a
phase
two
thing
that
we'd
like
to
make
happen
eventually.
So
what
are
outcomes?
So?
What
do
we
get
by
doing
all
this
kind
of
stuff?
You
know
what
I
want
to
get
out
of
the
business
of
doing
is
deciding
which
tests
run
in
what
phase
of
the
pipeline
I
think
a
system
can
be
smarter
than
I
will
be
about
what
kinds
of
decisions
I
would
make
in
that
place.
B
I want a system that's smart enough to make the trade-offs and manage the trade-offs with regard to which tests run where. I don't know that, as humans, we're always smart about those. In fact, as humans we are very loss averse, and so we will tend to run more tests than we need most of the time. So how do we have something that manages those trade-offs more scientifically? And then, over time, as the system gets smarter, it can figure out which tests can go further to the left.
B
Which tests run faster; help me actually manage my test base in a smarter way. A system like this ought to be able to tell you which tests are bad and give you advice about how you might refactor that stuff. So there are a lot of really interesting outcomes that could come from this. In the spirit of a lightning talk, my question, and again we don't have to do it here on the stage, but let's talk today, find me after the talk, I'll be around, is: is this something that's useful?
B
Is this something that you would be interested in working with us on? We actually have something that does test prioritization inside of Salesforce. We intend to open source and contribute that, but if there are other pieces of that ecosystem that you think would be useful, or that we should take into this as we start to make it more public, I would be very interested in hearing from everybody or anybody that's also interested here today. All right.
C
Right, good evening, folks. My name is Balaji Siva; I'm part of OpsMx, a startup based in the San Francisco Bay Area. Today I want to talk about an operator for Spinnaker. Before I get started: how many people are using Spinnaker in production? All right, not many, actually. How many are interested in trying Spinnaker? Quite a lot. Okay, excellent.
C
Spinnaker, obviously, is a great multi-cloud deployment tool, so you're able to deploy to multiple clusters or multiple environments. You could deploy a Spinnaker per service or per region, for example; that is very much possible, and it scales really well. But there have also been use cases where people have asked to deploy Spinnaker for a single group of developers, or a team, or a project, and so on, so you have multiple instances of Spinnaker running. Spinnaker itself is a fairly large, roughly ten-microservice application.
C
You typically have to have a database for the pipeline configuration, which is usually MinIO or Amazon S3, for example, and you also need another database for maintaining the runtime information, like which pipelines were executed, because you want to make sure that when Spinnaker comes back up, you have all that information available. So this is usually what the application looks like. Halyard is the current model of deployment for Spinnaker: installation, upgrade, and so on.
C
I
think
we,
but
you
have
to
it's
still,
not
I,
guess
seamless,
as
many
people
want.
One
of
the
common
things
I
hear
from
smaller
teams
is
that
snicker
is
a
little
heavy
for
my
use
case.
Right
I
mean
now
you
used
to
using
scripts,
for
example,
and
this
is
like
a
heavy
approach
to
for
deploying
a
few
applications.
C
So
that's
why
the
operator
is
an
interesting
use
case
that
you
interesting
interesting
opportunity
here
when
people
are
familiar
with
operator
concept
in
communities,
ok,
a
few
people
so
essentially
operator
is
that
was
introduced
recently
that
you
know
maybe
a
couple
of
years
now.
At
this
point
it
allows
you
to
manage
the
lifecycle
of
kubernetes
application.
In
this
case,
the
kubernetes
application
itself
is
a
spinnaker
itself,
so
now
you're
managing
the
spinnaker
as
an
application,
and
you
have
to
worry
about
the
day
zero.
C
One
day,
one
and
day
two
activities
day,
one
could
be
installation
and
data
could
be
upgrade
scaling,
etc,
etc.
So
operator
makes
it
very
very
simple
to
do
that.
There's
a
there's,
an
industry
effort
around
this
topic.
There
are
multiple
phases
in
which
you
can
build.
An
operator
operator
is
as
an
SDK,
so,
for
example,
it
doesn't
have
to
be
for
communities.
Obviously
you
can
take
any
of
your
communities
for
spinnaker.
C
For example, you take any particular Kubernetes application of yours and create an operator for it. Essentially, you're taking the everyday operational aspects and encoding them as part of the operator, so you can maintain all of this in an easy fashion. There are phase 1, phase 2, phase 3, phase 4, and phase 5 maturity levels. Essentially, think of it like this: if you have stateful applications and you want to maintain upgrades and lifecycle management, you can do that with the discipline of the operator.
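The lifecycle management an operator does boils down to a reconcile loop: compare the desired state declared in a custom resource with the observed state of the cluster, and act to converge them. Here is a minimal sketch of that idea, with plain dicts standing in for the Kubernetes API and Spinnaker's services.

```python
# Minimal sketch of an operator's reconcile loop. A real operator would
# read desired state from a custom resource and observed state from the
# Kubernetes API; the service names below are illustrative.

def reconcile(desired, observed):
    """Return the actions needed to move observed state to desired."""
    actions = []
    for svc, replicas in desired.items():
        have = observed.get(svc, 0)
        if have < replicas:
            actions.append(("scale_up", svc, replicas - have))
        elif have > replicas:
            actions.append(("scale_down", svc, have - replicas))
    for svc in observed:
        if svc not in desired:
            actions.append(("delete", svc, observed[svc]))
    return actions

desired = {"gate": 2, "orca": 1}   # e.g. from the Spinnaker CR spec
observed = {"gate": 1, "deck": 1}  # what's actually running
print(reconcile(desired, observed))
# [('scale_up', 'gate', 1), ('scale_up', 'orca', 1), ('delete', 'deck', 1)]
```

Running this loop continuously is what lets an operator handle the day-one and day-two activities (install, upgrade, scaling) automatically.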
C
The publicly available operators are on a website called operatorhub.io. You can see a lot of operators already there, and you can take an existing available operator and deploy it to your Kubernetes cluster. If you want to build a custom operator for your own internal application, you can do that too, and that's the benefit you get with this. So in this case, we have an operator for Spinnaker that is already available for you.
C
You can go there and install this operator to get up and running with Spinnaker very quickly. Our goal here, in this case, is just level one for now: basic installation, which basically still uses Halyard underneath, and within Halyard we make sure that you're able to bring up the other services and maintain the state of all the other services.
C
So if you want more information about this particular operator and how you can use it, there's a blog post that we wrote on the OpsMx website, a Spinnaker operator guide, and it also has a video you can watch that shows you how to install the operator. I'm not going to go through it, because you can watch it later, but for more information definitely reach out to me, and I'm going to give you some URLs here through which you can reach out as well.
C
Operatorhub.io is where you go to download the operator, and you can also go to our blog to make sure you understand what it is and how to install the operator, and watch the video. If you have any questions or feedback: like I said, this is a v1 version, we will be doing more, and we're happy to get feedback, so send us an email at hello@opsmx.com. Thank you.
D
Hey everyone, my name is Kate Williams and I work for Microsoft. I am in the office of the CTO for Azure, and I wanted to take a minute to talk with you today about software supply chain security and some of the thinking we've been doing at Microsoft. My reason for this is that it's an industry problem, and I'd love to work with some of you folks on the same issue.
D
If that site goes down or becomes compromised, then we're also interested in validating these things, validating artifacts for reliability, even before we bring them into the developer system. Then, once all artifacts have gone through some sort of ingestion process, they'd flow through the regular path: developer builds, stable builds, and then the release process. So that's where we want to be.
D
One of the things that we find is that in this ingestion section I talked about, where we want so many things to happen, where some of our teams are doing those things today, they're happening manually. We don't have good, strong processes around that, and that's something we really want to beef up for ourselves. But we also know that many Microsoft customers, and people in the industry generally, have similar concerns.
D
D
The goal for the security framework is that software can move securely through the supply chain, and that we can have signing and policy and validation at each step along the way. One project that we've looked at is called in-toto, and that project has the concept of signed metadata that describes the artifact, including the license and the build steps. It also has a way of describing policy for identifying what's expected and allowed into a system, and it has a method for inspecting metadata to verify that artifacts meet policy.
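The idea of signed metadata plus policy verification can be sketched as below. Real in-toto uses public-key signatures and a richer layout format; here an HMAC stands in for the signature so the example stays self-contained, and the metadata shape is invented.

```python
# Sketch of signed-metadata verification: a record binds an artifact's
# digest and license under a signature, and the verifier checks both
# the signature and the policy before accepting the artifact.

import hashlib
import hmac

KEY = b"functionary-key"  # stand-in for a real signing keypair

def sign_metadata(artifact: bytes, license_id: str):
    meta = f"{hashlib.sha256(artifact).hexdigest()}|{license_id}"
    sig = hmac.new(KEY, meta.encode(), hashlib.sha256).hexdigest()
    return {"meta": meta, "sig": sig}

def verify(artifact: bytes, record, allowed_licenses):
    expected = hmac.new(KEY, record["meta"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["sig"]):
        return False  # metadata was tampered with
    digest, license_id = record["meta"].split("|")
    # Artifact must match the signed digest and meet license policy.
    return (digest == hashlib.sha256(artifact).hexdigest()
            and license_id in allowed_licenses)

record = sign_metadata(b"binary-bytes", "Apache-2.0")
print(verify(b"binary-bytes", record, {"Apache-2.0", "MIT"}))  # True
print(verify(b"evil-bytes", record, {"Apache-2.0", "MIT"}))    # False
```

The substituted artifact fails because its digest no longer matches the signed metadata, which is exactly the property an ingestion gate needs.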
D
It is possible to compromise build environments, even to compromise compilers, so that they output vulnerable or compromised artifacts, and it's very difficult to even know that that's happened. The way to get around that is to have multiple independent systems building artifacts and then cross-checking them. So that's supply chain security, and I've been talking about it as something that we've been looking at inside of Microsoft.
D
The CDF is still getting started, but as the processes come into place to create new working groups or special interest groups, this is certainly an area that Microsoft is interested in participating in, and if there are others of you that are as well, I welcome you to contact me and we'll see if we can get something put together. That's all, thank you.
E
Okay, it really is the era of Linux on the desktop. This is an abridged version of a talk I'm going to give tomorrow; I'm going to go pretty quickly, and hopefully it's interesting. I've got a bunch of live demos. I'm going to talk about unit testing configuration using Open Policy Agent. I'm garethr on the internet; I won't go into who I am too much, but I've just started a new job at Snyk, doing a bunch of product and security and building tools. Who's come across Open Policy Agent before? A scattering of hands, not loads.
E
It's one of the projects in the CNCF now, and ultimately it's a component of a lot of other tools; it's used as a generic policy enforcement engine. Me going last works out, because other people were talking about where we need policy, and Open Policy Agent is an implementation that can get us there quickly. It's used in a lot of other tools: Gatekeeper is an admission controller for Kubernetes, and I'm going to show you another tool I've been building using it.
E
Ultimately, the short version is that with Open Policy Agent you bring policy, described in a language called Rego, and you can bring data; for example, you might have a bunch of vulnerability data. The engine makes the decision: you put in your config, or your response, or your request, or whatever it is, and it makes a decision. Obviously you can then use that in lots of different contexts, and the one that I'm probably most interested in is the development cycle.
E
So, whatever you're building, you might have some local development. That generally is you, or maybe you in a pair; it's fast feedback. Your build-test cycle wants to be fast: you're not running long-running tests, you're getting just enough work going. Continuous integration is great at combining things together from multiple different people, but you have to wait for the CI system, and deploying all the way to production probably takes a bit longer still.
E
All of these are good things, but they have different feedback cycles. Where Open Policy Agent has been used mainly within the Kubernetes community so far has been at the end further away from the developer: it's been used in production clusters, and it's been used to guard clusters from configurations that breach some organizational policy. That's great, but it doesn't have the immediacy of feedback that I think you can also have. So I've been asking: well...
E
But if we're writing code, shouldn't we be writing tests? I would say yes; hopefully other people agree with me there. So I want to introduce a new tool I've been playing with. It's open source on GitHub; I've shown a few people, but I don't think most people have seen it. It's called conftest, and it tests configuration; I know, naming is hard. As an example from the Kubernetes space: given a standard deployment, we can write our policy in Open Policy Agent's Rego language.
E
All we're saying here with this example is deny: we want to capture a bunch of things that we want to deny. We want to say: well, actually, if that input has a kind key, and that key has a value of Deployment, and it doesn't have runAsNonRoot set to true, then deny. With an understanding of the Kubernetes deployment description, what we're basically saying here is: hey, we're not running any root containers, that's a breach of our policy. That's our organizational policy!
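The deny rule just described, re-sketched in Python so the logic is explicit. The real policy is written in Rego and evaluated by conftest; this is only an illustration of what the rule checks.

```python
# Python re-sketch of the Rego rule: deny a Deployment whose pod
# securityContext doesn't set runAsNonRoot to true.

def deny_reasons(manifest: dict):
    reasons = []
    if manifest.get("kind") == "Deployment":
        ctx = (manifest.get("spec", {}).get("template", {})
               .get("spec", {}).get("securityContext", {}))
        if ctx.get("runAsNonRoot") is not True:
            reasons.append("containers must not run as root")
    return reasons

bad = {"kind": "Deployment", "spec": {"template": {"spec": {}}}}
good = {"kind": "Deployment",
        "spec": {"template": {"spec": {
            "securityContext": {"runAsNonRoot": True}}}}}

print(deny_reasons(bad))   # ['containers must not run as root']
print(deny_reasons(good))  # []
```

As in Rego, the rule only fires for the matching kind, and the result is a list of reasons rather than a single boolean, which is what lets a tool report every violation at once.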
E
We don't do that; let's go ahead. Obviously it's a simple policy; you could make it more complicated. Maybe that's fine in some namespaces and not others; maybe it's fine if you're in a whitelist. Your policies can be arbitrarily complex, because Rego is a language for doing just that. Conftest then just gives you a nice CLI tool to test your configs: conftest test and the name of your config file, and, well, in this case that one had a breach of policy. Let's see that in practice.
E
I've got a bunch of Kubernetes config files, but I won't look at those; let's have a look at the policies, because they're a bit more interesting. I have the deny one I mentioned, along with some other bits and pieces. Here we've got another common policy area: does this have the labels that we require for all of our containers? And there's a slight change here as well.
E
I've started building up a library of helpers for the Kubernetes space. We've also got a few other things going on in there: we've actually got not just deny, we've got warnings. Maybe there are some things where we want to say, you should know about this, but we haven't banned it quite yet. In this case, I'm warning if you're using Services explicitly; again, a simple example, but you can do the truly complex ones.
E
One of the other interesting things about OPA generally is that it has an inbuilt testing mechanism, so you can actually write tests for your Rego code in Rego. If you're thinking this has got meta, there are tests for the policies and I'm using a tool to test my configs, you would be correct, but I quite like that. Again, it's code; you should write tests for it, all the way down.
E
So we can actually run our tests and have some sense that, yeah, those policies haven't got bugs in them, and then we can conftest test something and we get a bunch of results back. Thank you. I've got one more demo; I will be very quick, so I really will go really quickly through this, and come to my talk tomorrow for more. It's not just Kubernetes: it's any structured data we can test. So there's a bunch of serverless examples,
E
There's a bunch of Terraform examples, and there'll be as many examples as I can get in. I'll do one more demo. Obviously, adding this to a CI system is simple enough, and we've been looking at Tekton today, so I managed to get a Tekton example going. We've got a TaskRun spec here; it will grab a GitHub repository which has some Kubernetes configs in it, and it will apply some policy from that repo.