From YouTube: SIG Security Meeting 2019-9-25
F
We don't have two scribes. I can be scribe as long as somebody else can take over, because I'm also going to talk about the open assessment; so is somebody else willing to switch? Yes.
F
Yeah, so JJ did that last time. We wanted to iterate on the format a bit to make sure we have a little more lead time. So thanks for mentioning it, Amy. I mentioned it too; I was able to go to the policy team meeting in the afternoon last weekend and mentioned it to Howard. Howard and Erica are the leads of that team, so it would be great if we could try to get that PR done. I'm mostly talking to JJ because Howard is sometimes not available in this time zone.
F
I mentioned the syncing up with the policy group. I've also been catching up in advance of IIW, which is October 1st through 3rd. If anybody's local, I'd highly recommend it: it's an unconference focused on identity that's been happening for many years. The OAuth standard, for example, came out of work at that group to, you know, get everybody to stop sharing usernames and passwords, and they're doing a lot of really interesting work.
F
At the last couple of events a lot of people have been focusing on self-sovereign identity, which is pretty interesting to track. So, catching up on reading, I've been digging into verifiable credentials, which are a relatively new W3C standard that is emerging, and Howard actually chimed in on Twitter and asked if that would be relevant for the group. So since I'm learning about it, I thought I would ask other people here.
F
We can talk about it later, but I just kind of wanted to put it out there that I'd be up for seeing if I could get somebody from that effort to present to the group, if people thought it was relevant and interesting. And then my other update is also on the agenda, which is that I've been helping contribute to the open assessment, and JJ has asked me to talk about it a bit today.
E
And I don't remember seeing it in any of the docs that we have. As the SIG Security working group is working on assessments and evaluations of projects, and those documents are in draft but available to the group, what is the standard practice for using the information within those draft documents?
F
What exactly is the question? Okay, so.
E
When SIG Security is doing an assessment of a particular project, the draft and all of our recommendations, commentary, and updates that we're posting through the git flow are publicly available. Anybody can go in and see the PRs; they can see all the comments.
F
I actually think it would be important to have a caveat. I've just been assuming it goes without saying that all this stuff is unverified until we all approve it. But if somebody dropped in from, you know, wherever, and wasn't aware of that, it could be amplified in a way that would be undesirable, right, and so we...
E
We get access to interesting information about some of these projects that's not necessarily publicly available until we start interacting with them and actually writing it down, creating a ticket, and submitting a PR on it. How do we provide assurances to those organizations that we've done our due diligence ourselves, as well as handle somebody outside of that coming across it?
F
So we have this caveat language at the top of the whole thing, right. At least the intent of this description was: this is meant to give you a path into thinking about the security of the project, not to replace your own process for determining whether it's a fit for you. And so there's, you know, framing of the assessment in general. We had a lot of discussion early on that we didn't want these assessments to be approvals.
F
We don't believe that they're binary, and, you know, we've been careful to say that just because the project has work to do doesn't mean that's a negative thing; it's in fact a positive outcome of the process, and so forth. So I think it might be good to, I don't know, reflect on this a bit, right, to make sure that we're, you know...
E
I would like a disclaimer or caveat for people that are coming to the repo and coming across this information. I was thinking maybe a line or two in the README, and potentially expanding upon the Code of Conduct, because we are a security-focused special interest group. Above and beyond the normal, humane code of conduct, the caveat would be in there: if you're a member of this group, the information that you come across is always in draft and not to be considered actionable, or taken and run with.
A
If you can point to the specific issue and talk about it, that would be good, but the overall stance on that is essentially that the SIG operates on its own to influence things. It's not the goal of this group; it's rather to help the SIG in surfacing what the issue is to the rest of the community, right.
F
I always come back to our charter and mission, right. Our mission is to reduce the risk that cloud native applications expose end user data or allow other unauthorized access. So if there is an issue, right, in any of the products, particularly in CNCF projects, because we're part of the CNCF...
F
So if one of our projects has an issue that we think is risky to cloud native applications in the ecosystem, then I think highlighting our concern matters. We have a forum here, right, and we have the ability to invite a SIG or a project to discuss an issue that we consider to be risky, and we can talk about why we consider it to be risky and what mitigations they know about, and I think that forum creates opportunity for action.
F
So I participated in this security assessment. For those of you who might not be following the details here, we're on our second. All of the assessments are tagged under this assessment tag; if I remove the "is open" filter, you can see that we've got three assessments in total: in-toto is completed, Open Policy Agent we are on the verge of completing, and Falco we are on the verge of starting. So our goal is to have five assessments and then reflect on our process.
F
Of course, if anything is in our way, we can update our process, but we are, you know, taking baby steps here: we're doing our second assessment, and then we want to talk through our learnings, but not, you know, deep-dive too much into "maybe we should do X, Y, or Z"; we just capture those. And so we also have another label for the assessment process.
F
So, let me check this. You can see there are a lot of open issues, right, so that if you're participating in the assessments, or observing them, or, you know, hearing about them in the meetings, and you're like "wow, they should really do XYZ", you can look at everything labeled "assessment process". This is the time for us to be capturing what we're learning, and ideas about how to improve the process, and then we'll review all of these issues after these first five assessments and do some improvements.
F
So this is the assessment we did for OPA. Some of you may recall we had a presentation by OPA some time ago, and they presented, you know, how it works. We have background here where, basically, policy is a big part of security, right; we have a breakout group that focuses on policy, and in order to say that you have a secure system, you need to make sure that you actually have some policies and that they're being followed.
F
OPA is a project that helps with this by making it so that you can write your policies in this Rego language and then validating them: doing the policy enforcement and implementing those controls in ways that can be reasoned about by machines. So that's kind of it; I went through the summary, but I'll go through this now a little bit in order. We have this maturity section, which is kind of a thing we don't quite know how to define.
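[Scribe note: the "policies that machines can reason about" idea can be sketched in a few lines. This is an illustrative Python analogue of an OPA-style allow rule, not OPA's actual API or Rego syntax; the input shape and field names are invented for the example.]

```python
# A hypothetical allow rule, in the spirit of what OPA lets you express in Rego:
# allow if the user is an admin, or is reading a resource they own.
# (Illustrative only; real OPA policies are written in Rego and evaluated
# by the OPA engine, not hand-coded in the application like this.)
def allow(input_doc: dict) -> bool:
    user = input_doc.get("user", {})
    action = input_doc.get("action")
    resource = input_doc.get("resource", {})
    if "admin" in user.get("roles", []):
        return True
    return action == "read" and resource.get("owner") == user.get("id")

# Because the policy is explicit data plus logic, it can be tested,
# audited in version control, and enforced mechanically:
assert allow({"user": {"id": "alice", "roles": []},
              "action": "read",
              "resource": {"owner": "alice"}})          # owner reading own resource
assert not allow({"user": {"id": "bob", "roles": []},
                  "action": "delete",
                  "resource": {"owner": "alice"}})      # neither admin nor owner
```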
F
Although, you know, it's worth noting that the contributors are primarily from Styra, because there have been a bunch of conversations in the TOC about wanting open source projects that are primarily one company to have enough participation from the community that they're robust if that company decides to do other things, right; getting to sustainability. That's a little outside security, but it's of course a factor.
F
No, thanks. So I went a little bit over the design. I think the key takeaway from our perspective is that if you have heterogeneous infrastructure or a high rate of change, where lack of policy enforcement would create a big business risk, that's when the added overhead of implementing OPA would be valuable.
F
So this is a common situation, right: people have on-prem and cloud, or multiple clouds, or just different services that all need to have similar or the same policies, and what we're seeing is that heterogeneous infrastructure presents risks, because people can't reason about their policies or know that they've been implemented. That's sort of common in this cloud world. And so the added benefit of OPA also presents risks, right. So it's great: we have this policy as code.
F
Policy as code gives you expressions that you can, you know, reason about; you can implement the same policy across heterogeneous systems and separate your security code from your application code. But then these policies really require the same care as code, and there's concern that there will be a false sense of security just because you're using OPA. So a lot of our discussions were really around: how do we...?
F
Well, we might not have; we can write that in the notes. I want to just double-check to see where we had that, but it's a good question. From what I remember, I think the target user is the operator or the developer, and that you could use OPA the way Netflix uses OPA; they're not a platform per se. I mean, I don't know, maybe they have APIs, but it's primarily to secure it. Nobody...
F
Sure, well, that'd be fabulous; thanks for pointing it out, JJ. So yeah, I wanted to have everybody... In the use cases doc we have these personas that are the different users: operators, administrators, developers, end users, and platform implementers, and the security assessments are supposed to focus on who uses this stuff, so we should make sure that we cover that. But I think that's interesting; there might be an opportunity for looking at who's using OPA to find some of those platform implementers we've been looking for. Great question. Yeah.
F
Okay, so generally it's for controlling access to a service, with the caveat that I am NOT an OPA person; I've never actually used this technology hands-on, so people should feel free to correct me.

Yeah, I do have a question, possibly for the OPA folks, because, for more security, since we're the security working group: we are expanding the attack surface, meaning, you know, OPA itself could open us up to some vulnerability and being attacked, and the policies could be manipulated. I was wondering if you have seen anything specific as to what the preventative measures are that OPA is taking or recommending.
F
So I think that would be true of the addition of any part of your system, right: if you add anything, you're expanding the attack surface, but then you have to think about whether the issues you're mitigating are bigger than what you're adding. And that's part of our analysis: you probably shouldn't be using OPA if you have a very, very simple policy and a homogeneous system, just because it would add more complexity than is merited.
F
That's sort of our analysis. To answer your question, though: we basically went through this process of articulating what things are risky, right, and if OPA is successfully attacked, it is your point of policy enforcement, and that is, you know, pretty risky. So we went through it, and there are actually a lot of sharp edges around whether you have set up OPA correctly and whether you are managing your policies effectively, because OPA isn't a policy management system; you have to figure out, outside of OPA, how you're going to distribute your policies.
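[Scribe note: since policy distribution is left to the user, one generic pattern for the kind of care that takes is pinning a digest of the reviewed policy and refusing anything else. Plain Python sketch; the scheme and names here are a generic illustration, not OPA's own bundle mechanism.]

```python
import hashlib

def load_policy(policy_text: str, expected_sha256: str) -> str:
    """Refuse to load a policy whose content doesn't match the pinned digest."""
    digest = hashlib.sha256(policy_text.encode("utf-8")).hexdigest()
    if digest != expected_sha256:
        raise ValueError("policy digest mismatch; refusing to load")
    return policy_text

# Pretend this policy text arrived from a distribution channel;
# the digest was pinned when the policy was reviewed.
policy = 'allow = input.user == "admin"'
pinned = hashlib.sha256(policy.encode("utf-8")).hexdigest()
assert load_policy(policy, pinned) == policy  # untampered policy loads
```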
F
Just on a personal note: there seems to be this sort of common pattern in a lot of the exploits I read about, which is that things are not configured the way that people think they are configured, right. The systems become so complex, people have so many VMs and services running, that it's easy for something to not be secured at all, and things end up being wide open on the internet unintentionally. And I...
F
...think that one of the things that makes me interested in following OPA is that, you know, that's the thing you're mitigating: the sort of "oops, forgot to secure that, forgot to update this". You know, "I have to update my policy in 15 places and it's in different formats"; it's just too easy to make a human error that misses those. So it might be good for Kate or somebody to check, in reading the overview: do we address those points?
A
I think it is also called out in the review in terms of what is in scope for OPA and what it solves and what it doesn't, and there is some call-out there, but it would be useful to chime in on the PR, and I can add to that as well: to be clear on the increase in attack surface versus scoping it down, and tooling to mitigate some of these.
F
This is in there. So, for all of the OPA work, this process has led to us writing up, filing, or highlighting open issues, and many of the open issues that are in the project recommendations came out of the review. The review is really owned by... there's a self-assessment where Ash, who is a contributor to OPA, owns getting that over the line, and then either he or we report issues into OPA, so that once this PR is in, there are open issues tracking everything we raised.
F
So when we chime in, we're doing sort of two things: one is producing this document, which is kind of anybody's guide to understanding the security profile, the risk profile, and the benefit of this particular project, but it's also allowing us to track these open issues, and we've been sort of chatting about writing these issues such that...
F
There's been talk that we want to re-review these assessments periodically, maybe annually; maybe if a particular project hasn't added any features in a year, right, we could do a cursory review and just look at the issues and do a quick update, whereas if a project has added a bunch of features related to security, then maybe we would do a full assessment. So we're trying to queue this up so it's easy to update.
F
I know the contributors are mostly Styra. I just looked at these 77 contributors with Ash and kind of looked through the top contributors; I don't remember which of these people it was, but there was somebody in the top four that was from Chef, which seemed to me to be a good sign, and then there's somebody from Google who's pretty far down, because it's mostly spec stuff. Here we go.
F
"A confrontation" would be too strong a word; we had some good discussions around what, this being our second one, has become the norm, which is: where are the edges of OPA's responsibilities, right? Particularly around Rego usability and around defaults. It's very challenging to make things secure by default, because the most secure thing by default is to just turn off access completely, and that's not useful. So how do you make it less likely that somebody is going to do something incorrect because they don't know what they're doing?
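[Scribe note: the secure-by-default tension in miniature. A default-deny evaluator is trivially safe and trivially useless until explicit rules are added; hypothetical Python sketch, not OPA's behavior.]

```python
def evaluate(rules, request) -> bool:
    """Default deny: a request is allowed only if some explicit rule matches."""
    return any(rule(request) for rule in rules)

# With no rules, everything is denied: maximally secure, minimally useful.
assert evaluate([], {"path": "/health"}) is False

# Each rule an operator adds deliberately opens one thing up.
health_ok = lambda r: r.get("path") == "/health"
assert evaluate([health_ok], {"path": "/health"}) is True
assert evaluate([health_ok], {"path": "/admin"}) is False
```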
F
And so we talked about it; I ended up having a brainstorm, because the OPA folks came in with a stance which I think is, you know, sort of reasonable from their perspective, which is: well, we're giving you a sharp knife, people need to do these different things, so we can't restrict it, and we don't know what your policies are, so there's not much to do there. And, you know, initially all of the project recommendations were turned into documentation improvements, right.
F
These are really starting points that the OPA team, and anybody who wants to get involved, can, you know, add ideas to. And then I'll just round out by talking about the CNCF recommendations: similar to our last project, in-toto, there are certain things that the project is not well positioned to do.
F
If the CNCF wants to support this project more: having a study of user practices around this, you know, whether people are catching common patterns, and also learning from the end-user companies, where there may be specific integrations that should be higher priority based on what people are doing with OPA. That would be more impactful than maybe other things that they might do, because we don't have visibility into what the CNCF end-user companies make. So that's...
F
That might also be an interesting thing, I don't know, as part of the reflection: what did we really learn? What were the things that were maybe unexpected by either the project team or us? I like that way of thinking about review of these security assessments.
A
Yeah, those are good inputs. So we have seven more minutes; any thoughts or questions on the process itself, or the project? And again, I think the reason for bringing this up in this forum is that the PR is open and I'll be doing a review of it, but anybody who wants to chime in and add any comments that need to be considered, that would be very helpful.
H
We're around 81, and now that we have it out there, we feel like we'll be able to push even more on registrations. I think we were thinking about 150 capacity, somewhere around there. That's kind of the next thing we need to start figuring out: how much room we have, and then what we can effectively do with the space that we have. So that's probably the next priority that we're going to be working on in our weekly calls.
H
So we're excited for that. We've made lots of good progress, which has included some very cool, interesting features, one of which we're actually probably merging some of the code for today: gRPC-based outputs. One of our sticking points is that a lot of our outputs and alerts have been done in a more synchronous fashion, and with gRPC...
H
...it allows us to offload the alerting engine from the main Falco engine, and then we can have subscribers that are written in whatever language people prefer, and those subscribers can then forward the events and alerts into whatever system, like Elasticsearch or Kafka or whatever it might be. Having this kind of gRPC-based streaming service is going to be really beneficial to the project.
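[Scribe note: the decoupling being described, where the engine publishes alerts and independent subscribers forward them to sinks, can be sketched with a plain in-process queue. This stands in for Falco's actual gRPC streaming API, which is not shown here; names and alert shape are invented.]

```python
import queue
import threading

alerts: "queue.Queue" = queue.Queue()
forwarded = []  # stands in for a real sink such as Elasticsearch or Kafka

def engine_emit(alert: dict) -> None:
    # The detection engine only enqueues; it never blocks on downstream sinks.
    alerts.put(alert)

def subscriber() -> None:
    # A subscriber drains the stream and forwards each alert to its sink.
    while True:
        alert = alerts.get()
        if alert is None:  # sentinel: shut down
            break
        forwarded.append(alert)

t = threading.Thread(target=subscriber)
t.start()
engine_emit({"rule": "Terminal shell in container", "priority": "WARNING"})
alerts.put(None)
t.join()
```

The design point is the same as with gRPC streaming: the engine's hot path does no I/O to sinks, and subscribers can be swapped or multiplied without touching the engine.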
H
I was running some numbers around how we've been performing in the sandbox versus pre-sandbox, and one of the interesting metrics I saw was that before sandbox we had about 34 daily active users in our Slack channel, and after sandbox we have about 104 daily active users in the channel. And from a weekly active user perspective it went from like 60 to 200, so the community is really thriving; we've got a lot of activity.