From YouTube: Kubernetes SIG Security 20210923
A: All right, well, hello, everyone! It is good to see everyone again. Welcome again to Kubernetes SIG Security. It looks like we have a bunch of subgroup updates, so Ray, tell us a little bit about what's going on, please.

B: All right, mine will be quick: we will start to move some artifacts to the new repo. There's more on that in the discussion section later, and that's as far as I can announce in terms of progress.
C: Hey, yeah, hey all. So, very quickly: we did the brainstorming document for the admission controller threat model, and we've got lots of good input. One of the things I just wanted to try out, just as an idea: there is a tool called Deciduous, which is a kind of attack tree creation tool where you put in markdown and it creates a lovely attack tree. I rearranged things a bit, since it was very messy in the initial version. Let me find the screen share and I can show it.

C: Cool, so now I can just do that. As you can see, what it does is essentially create a little attack tree, and you just put in markdown: over here on the left side, you basically just put in the markdown for the attacks and the mitigations and the goals, and it creates the attack tree. So it's just an idea to try and help us think about it.

C: I like how we're going to lay out the document, but I just liked this in terms of giving us a visual view of what that looks like. I'm actually a great fan of this as a way of creating attack trees, because, basically, you just write markdown and then paste it in.

C: So if anyone wants to try that out, I've put the links to both the markdown and the Deciduous app in there, so feel free to do that. The next step will be that we'll start drafting the document. It's a bit busy at the moment because it's kind of grown, but hopefully we'll get into the next steps. If anyone's got any feedback on this, or any additions or changes, we're absolutely very interested in that feedback; just add it in.
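For anyone curious before clicking through, input to a Deciduous-style attack tree tool looks roughly like the sketch below. This is an illustrative shape only: the node names are invented, and the exact keys the app expects may differ, so treat it as a flavor of the idea rather than a paste-ready document.

```yaml
# Illustrative Deciduous-style attack tree input (node names invented).
title: Admission controller threat model (sketch)

facts:
- api_access: Attacker can reach the Kubernetes API

attacks:
- webhook_ignore: Validating webhook failurePolicy is set to Ignore
  from:
  - api_access

mitigations:
- fail_closed: Set failurePolicy to Fail for security-critical webhooks
  from:
  - webhook_ignore

goals:
- policy_bypass: Workload admitted without policy checks
  from:
  - webhook_ignore
  - fail_closed
```

The tool then renders this as a graph, which is what makes pasting a small text document in and getting a diagram back so convenient.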
A: I've tried to write nice-looking attack trees using DOT, writing the DOT myself, and it's better than nothing, but it's hard, yeah. And this one...

A: Vinayak, did you want to say something about the blog post? That's not me; I think that's Pushkar.
D: Hi, everyone, hope everyone's doing well. So this is a collective effort of Jim, Angel, Pushkar, and myself. We worked on writing a good review of the NSA/CISA hardening guide blog post, with extra thanks to Rory for adding comments on the initial draft. Pushkar is going to create a draft PR soon, and once we have it we'll link it here and post it in the security channel.

D: So if all of you can take a look at it and add your comments and feedback, that would be great.
B: Are there any target publish dates for that?
D: I want to check out the SIG Docs blog to see what their lineup looks like. I think we missed the window for September 13, which is what we were targeting, and unfortunately we couldn't make it due to other commitments. So I will coordinate with them and I will try. I am thinking it would be nice if we could get it out before KubeCon, but if everyone's busy, that's okay too. So the target would be before KubeCon, if we can get it.
D: Okay, I will definitely create a request or reach out to one of you. I know Rey is wearing multiple hats now; he is also participating a lot in SIG Docs. So probably I'll just reach out to Rey later and get the thing scheduled, so that we have some dates and we can time-box the review timelines and feedback and incorporate it.
E: Ray, one question: when I'm creating the PR, they generally ask for a date, if I'm not wrong, in the title of the file and in the metadata part. Is it better if I propose a date sometime after 27 September, let's say the first week of October? Then if that works for you, we don't have to make any changes, and if not, you and others can suggest one as part of the PR review.

B: Yeah, that works, okay.
A: All right, Pushkar, you wanna tell us about that learning session?
E: Yes. I'm so glad Adolfo could join us on Tuesday, a couple of days back. He did a great learning session on how we use SBOMs for Kubernetes. In case you missed it, you don't have to worry much, because we have a recording, and he is also going to give a similar KubeCon talk soon, three weeks from now. So definitely check that out.
E: I think this was a good introduction for me personally in terms of SBOMs, and looking at the example of Kubernetes and how it's used there made it very concrete. There were about 12 attendees at the top of the meeting, and it seems like this was helpful for folks; we got some good feedback and requests for the recording. So we'll continue to do these kinds of sessions every month, generally the third week of the month, on Tuesdays.
E: Oh yeah, and one more thing: we got some good pointers from Adolfo about where we can start helping him out in terms of making this better. So for all the people who want to do code contributions or talk contributions, we have another opportunity to create impact and help others achieve their goals.
A: I was just going to address the next one, which is on the subject of helping others achieve their goals. We have been discussing this in the SRC, and I'll make sure that we have an official response from the SRC in that thread this afternoon. Thank you very much for your patience.
A: Awesome. Security self-assessments: who would like to take this one?
E: Yeah, this is probably me again. I think this is more or less "no action required" anymore, because I got a couple of feedback items from Rory and Vinayak. Those, I think, are valuable suggestions, so I'll work on fixing those, and after that, because of the way OWNERS is set up on the new repo, we might need an approval from Tabitha or...
A
Ian
yeah
and
well,
a
thing
that
has
been
on
my
to-do
list
has
been
to
get
a
prn
that
creates
all
of
the
appropriate
subdirectories
with
the
owners
files
into
it
in
order
to,
in
order
to
unblock
folks
from
being
able
to
merge
into
their
own
areas.
There
yeah.
F: Sure, absolutely. Let me pull all of this up and enjoy full-screen sharing and window sharing.
F: That's good, okay. Yeah, thanks for having me. I joined a meeting here about a month ago, talking about how we are taking a closer look at the Kubernetes control plane with regard to multi-tenancy. Talking about Kubernetes multi-tenancy in general, there are a lot of publications out there.
F: We have a good understanding of what's typically possible, but we were actually more interested in the control plane aspects of Kubernetes multi-tenancy, and that from a security perspective. So I will dive a little bit into why that was interesting for us and what our scenario was. We had a work project going on where it was interesting to evaluate whether we could achieve stronger isolation between tenants by assigning tenants to dedicated worker nodes.
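The node-assignment idea described here is usually built from standard Kubernetes scheduling primitives; a minimal sketch might look like the following (the tenant label, taint, and image names are made up for illustration, and the blog post's own examples may differ):

```yaml
# Nodes are first labeled and tainted per tenant, e.g.:
#   kubectl label node node-1 tenant=alpha
#   kubectl taint node node-1 tenant=alpha:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: tenant-alpha
spec:
  # Only schedule onto nodes dedicated to this tenant...
  nodeSelector:
    tenant: alpha
  # ...and tolerate the taint that keeps other tenants off those nodes.
  tolerations:
  - key: tenant
    operator: Equal
    value: alpha
    effect: NoSchedule
  containers:
  - name: app
    image: example.com/tenant-alpha/app:1.0
```

In practice something trusted (an admission webhook or a controller) has to inject these fields, since a tenant who can set their own tolerations could schedule onto other tenants' nodes.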
F
So
we
don't
have
multiple
ports
from
different
tenants
on
one
node,
but
it's
that
only
parts
by
one
tenant
on
one
node.
So
that
was
the
thing
that
we
were
thinking
about,
and
in
doing
so
we
were
also
assuming
that
someone
would
be
able
to
break
out
of
a
container.
So
that's
also
what
I
meant
to
say
before.
F
Okay,
there's
a
lot
of
data
out
there
on
container
runtime,
hardening
containment,
runtime
isolation
and
what
to
do
there
all
good,
but
as
a
additional
line
of
defense,
we
wanted
to
see
how
well
the
kubernetes
control
plane
is
holding
up.
If
someone
were
to
compromise
a
worker,
node,
really
full
administrative
privileges
on
a
worker,
node
and
all
the
impact
that
comes
from
that,
while
digging
around
there
was
also
a
statement
in
the
2019
security
review.
F
That
basically
said
like
hey
it
didn't
look
too
bad,
but
for
true
multi-tenant
situations
we
probably
want
to
have
a
closer
look
at
the
kubernetes
control
plane.
So
that's
exactly
what
we
decided
to
do
and
basically
yeah.
As
I
said,
we
wanted
to
look
at
node
based
tenant
isolation
that
an
attacker
was
tenant.
Access
breaks
out
of
a
container
has
root
access
on
the
box
and
they
attempt
lateral
movement
from
this
position.
F
What
we
did
was
basically
perform
a
threat
model.
We
wanted
to
keep
that
short.
There
was
already
lots
of
data
out
there.
We
dug
a
little
bit
into
the
details
of
the
different
control,
plane,
components
and
what
which
one
would
be
the
most
interesting
one
to
analyze
and
yeah
basically
performed
a
vulnerability
analysis
from
there.
F
So
one
one
early
step
we
had.
We
were
also
evaluating
whether
it
would
actually
be
possible
to
implement
something
like
that.
So
there's
a
little
bit
of
a
of
a
side
note,
so
there
is
various
implementation
options
out
there.
If
you
want
to
assign
specific
tenants
to
specific
nodes,
there
are
more
details
about
that
in
the
blog
post
that
we
wrote
about
that,
and
we
also
have
code
examples
and
I
can
post
links
to
that
in
the
chat
later.
F
But
one
very
important
question
is
also
whether
your
tenants
will
have
kubernetes
api
access,
which
makes
this
whole
idea
way
more
difficult
than
if
you
would
have
a
custom
control
plane
in
front
of
the
actual
humanities
control
plane
or
a
custom
api
that
abstracts
things
away
a
little
bit
and
you
basically
just
orchestrate
the
kubernetes
orchestrate
kubernetes
as
your
personal
runtime,
without
giving
tenants
any
access
to
that.
F
So
what
we
did,
we
are
performing
a
threat
model
exercise
together
with
our
traders
partners,
so
they
like
were
looking
for
four
weeks
for
five
weeks
at
kubernetes,
where
we
were
also
digging
at
a
couple
of
things
and
yeah.
We
looked
at
a
couple
more
details
of
the
control
plane.
F
We
also
wanted
to
keep
it
strictly
focused
on
the
kubernetes
control
plane.
So
we
we
put
a
couple
of
things
out
of
scope.
For
example,
overlay
networking
was
out
of
scope
for
the
vulnerability
assessment.
I
I
mentioned
a
little
bit
about
that
in
the
blog
post
as
well.
That
is,
would
be
a
very
relevant
thing
to
keep
in
mind
as
well.
F: etcd was out of scope, and we kept CoreDNS out of scope with regard to implementation aspects, so bugs in that code were not interesting to us. If we had found any bad misconfiguration or something like that, we would have mentioned it, but we wanted to focus on implementation vulnerabilities in the Kubernetes control plane itself, to see whether it would actually be possible to rely on it as an additional isolation mechanism.
F
I
don't
think
I'm
telling
anybody
anything
new
here,
but
this
is
just
to
illustrate
a
little
bit
that
we
also
performed
a
tech.
Modeling
exercise
threat,
modeling
exercise
to
make
sure
that
if
anybody
wants
to
dive
deeper
or
like
perform
additional
vulnerability
assessments
on
the
kubernetes
control
plane
that
you
that
we
documented
what
we
did
so
far,
and
even
though
there
were
no
findings
for
us.
F
So
yeah.
The
main
idea
was
also
to
make
this
available
to
the
community
that
people
can
dive
deeper
into
that
in
the
future.
I
already
mentioned
it
in
the
very
beginning.
There
were
no
relevant
findings
in
the
full
report.
We
have
a
couple
considerations
in
there
if
you
would
attempt
such
a
note
based
multi-tenancy
model,
to
keep
in
mind
but,
as
I
said,
nothing
crazy
going
on
in
there.
F: If you were to compromise one of the worker nodes, then simply based on the way Calico organizes routes across the cluster, since basically any node in there is a fully authorized BGP peer, you're in a privileged BGP situation and can reroute traffic. That's just one plugin that I looked at.
F
It
really
depends
like
how
other
plugins
are
handling
their
the
routing
alignment,
but
this
is
definitely
something
very
important
to
keep
in
mind
if
you
would
want
to
rely
on
the
control
plane
or
like
on
the
control
plane
to
protect
against
the
attack
scenario
that
we
had
yeah.
We
also
wanted
to
keep
it
as
general
as
possible,
so
we
basically
said:
okay,
we
use
the
at
the
time
most
recent
kubernetes
released
release
that
is
deployed
via
cube
admin.
F: Yeah, that's a quick overview of what we did. In sum, we had a couple of person-months' worth of effort looking at the Kubernetes control plane without any findings, which is a bit frustrating, but at least the team now knows their way around Kubernetes internals way better.
C: I think the threat model was great, though. It really laid out a lot of the attack paths and impacts and mitigations, and just having that spelled out is really useful as an artifact going forward: you can say these are the things we've considered, and then, like you said, there are other things that haven't been, and people can comment on those. So I personally thought that was a really great report.
F: Otherwise there's way too much redundant work going on out there in the world; I don't think we need that, yeah.
F: Yeah, if any questions come up, or any more ideas: there were already a couple of Twitter conversations going on about which other aspects of the control plane would be interesting to look at, or which other approaches to take. Ping me; I think everything that we have should be shared.
A: All right, moving on from that: the repo exists now, and so, Pushkar, I think you wanted to start a conversation about what happens now that we have it. What can we do with it? How do we get the things into it that we wanted to get into it?
E: Yes, yes. So I think we have four directories in k/community sig-security today, and we have the charter and README in the same place. From my understanding, the README is at least auto-created, and the charter is something that, as I've seen for other SIGs, continues to live in k/community.
E: So it seems like everything else, except those two, could potentially be moved to the new repo, especially those directories: we have two for audits, one for docs, and one for tooling, and we could potentially have one for assessments in the new repo once we have one or more of those.
A: I think it sounds like a fine plan. The one thing that comes to mind for me immediately is the moving of the artifacts from the security audit in 2019.
A
We
moved
those
once
before,
and
I
think
that
I
think
that
there
were
some
comments
on
some
issues
that
pointed
out
certain
places
where
there
were
other
links
to
it.
But
since
that
has
already
happened,
we
can
use
the
lessons
from
that,
so
that
when
we
move
it
again,
we
can
do
so
with
with
better
preparation.
A
Yeah,
I
think
that's,
I
think,
that's
brilliant,
and
so,
if
somebody
wants
to
like
one
way,
somebody
could
put
in
a
big
pr
to
move
things
or
or
alternately
folks
could
put
in
pr's
for
the
particular
bits
that
that
are
most
interesting
to
them.
Like.
A: One way or the other, since OWNERS files cascade, I like the idea of having some top-level directories for the various subprojects, with the OWNERS files in there, and then giving approval within those subprojects to the folks who are leading them, with whatever reviewers they find necessary for their needs. But yeah, I think it's great.
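For anyone new to the mechanics being discussed: OWNERS files cascade, so an OWNERS file placed at the top of a subproject's directory gives its approvers merge rights over everything beneath it. A hypothetical example (usernames invented):

```yaml
# docs/OWNERS (illustrative); approvers here can approve anything under docs/
approvers:
- subproject-lead-1
- subproject-lead-2
reviewers:
- regular-reviewer-1
labels:
- sig/security
```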
E: Yeah, okay, cool. I also wanted to suggest this as a good first issue for any new contributor who wants to start helping us out. The way I can potentially help is to create an issue with the exact steps of what we need to move where; if you follow that issue, you will be able to create a PR that reflects what we want to do. So if anyone on the call always wanted to contribute but didn't know where to start, I'm happy to work with you, so let me know.
A: Thank you so much for that. I love that idea, because I feel like some of us spend a lot of time worrying about the technical complexity of Kubernetes, which is clearly significant, but there's also the complexity of the community machinery, like how we deal with pull requests within Kubernetes and all of those sorts of things. You have to learn those things too, and practice those things too. So thank you.
E: Yeah, and if anyone is new to the project: this is all kind of new to me too, even though Kubernetes has been very familiar to me for about four years now. It's not easy, but it's also not so difficult that you should be intimidated. I've just been diving head-on into it; we can make mistakes, come back from them, learn from them, and continue to improve. So don't be afraid, even if this is your first PR in the entire Kubernetes org.
A
We
all
we
all,
we
all
have
your
back.
That's
a
nice
thing
about
the
way
that
reviews
work
in
in
kubernetes
is
everybody
wants
to
to
see
you
be
successful,
and
sometimes
that
means
giving
high
fives,
and
sometimes
that
means
giving
suggestions
like
it'd
be
better
to
do
it
this
way,
but
always
with
caring.
A
I
appreciate
the
work
that
that
has
been
going
into
the
ambient
capabilities
cap
and
the
improvements
that
you've
made
to
it.
Would
you
would
you
like
to
share
with
us
where,
where
we
are
now
and
and
what
we
can,
what
we
can
do
to
help
that
go
forward.
I: Oh yeah, hey, hello, everyone! It's been a while since I joined one of these. I think there was some discussion last meeting with suggestions about what people would like to see in the KEP, so I read the meeting notes, and based on that I've been adding what we'd like to see for alpha: what alpha is all about, and what things need to be decided before we can go to alpha and merge it as implementable for alpha. So I've added those details in the KEP itself and tried to capture them, and also the comments and good suggestions that were on the KEP PR; I've tried to summarize and capture those as well in the relevant sections. That's pretty much it.
I: What I was thinking is that most of alpha would be setting up the infrastructure and getting the code in for the various parts, because there's a CRI part, there's a containerd and CRI-O part, and then probably updating the Pod API, if we go that way. Beta would then be more about: hey, how do we not let this become a way to just add any capability you want? How do we structure it and limit what can be done, in a way that's useful for most non-root containers to do what they're trying to do in the cluster, while not becoming a way to easily escalate? That would be the scope for beta. So in alpha we'll probably need to clarify: hey, don't take a dependency on this for adding some particular capability, because we are in the process of auditing which capabilities we should allow through ambient, through this new field.
I
If
we
choose
to
go
through
the
field,
and
then
then
we
can
have
that
discussion,
kind
of
just
focus
on
like
hey.
What's
the
what's
the
minimum
set
of
capabilities?
How
do
we
integrate
this
with
pod
security
and
which
settings
are
part
security
affected
in
which
way?
I
And
that's
that
I
wanted
to
keep
all
scope
for
beta
because,
like
if
you
have
that
conversation
with
like
how
the
infrastructure
should
set
up
for
to
make
this
all
work,
it
kind
of
just
all
becomes
a
little
too
overwhelming
in
the
comments
to
to
follow
so
that's
the
scope
and
then
ga
would
be
more
focused
on
like
bugs
improvements
and
and
and
like
really
locking
it
down
and
then
turning
the
feature
gate
on
and
off
us.
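For context while reading the KEP: today's capability handling (without the proposed ambient mechanism) lives in the container securityContext. The new field's name and semantics are exactly what the KEP is still deciding, so only the current mechanism is sketched here, with a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cap-demo
spec:
  containers:
  - name: app
    image: example.com/app:1.0   # placeholder image
    securityContext:
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]
        # Capabilities added today are permitted, not ambient, so a
        # non-root process without file capabilities cannot use them;
        # that gap is what the ambient capabilities KEP is about.
        add: ["NET_BIND_SERVICE"]
```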
A: Yeah, I just took a look at it, and right now the current status in kep.yaml is provisional, so yep...
A: We want to get anybody who has been following along on this, or who has fought with unprivileged containers that need to do certain slightly privileged things, to have a look at it and make sure that the questions that need to be answered to be able to go to alpha are reflected in the doc.
A
I
mean
I
for
one
will
we'll
try
to
have
a
look
at
that
pretty
quick,
so
that
so
that
we
can
either
get
it
merged
or
or
get
some
feedback
in
there
and
obviously
clearly
welcome
the
rest
of
the
thoughts
of
the
folks
in
kubernetes
security.
Community.
B: Yeah, so as part of releases there is a batch of feature blogs covering new or updated enhancements for the release; 1.23 is coming out in December. I've highlighted some possible feature blogs from this SIG, in cooperation with other SIGs as well, that might be good candidates for the feature blog process. We do have to opt in to what is going to be written by November 2nd. The first one is actually from a request.
B
Since
the
pod
security
admission
is
going
to
it's
pretty
much
going
to
be
turned
on
by
after
123.
So
it's
going
to
be
on
so
I
know
we
did
have
feature
blog
initially
for
122..
I
think
it'll
be
good
to
just
have
another
feature,
blog
to
say:
hey.
This
will
be
on
and
a
little
more
details
on
how
to
use
it.
I
know
duffy
has
a
good
video
on
how
to
on
how
to
use
it
on
cloud
native
tv.
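As a quick reference for that "how to use it" piece: Pod Security admission is driven by namespace labels, along the lines of the following (the namespace name is invented; the label keys are the documented `pod-security.kubernetes.io` ones):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-team
  labels:
    # Reject pods that violate the restricted profile...
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.23
    # ...and additionally surface warnings against baseline without blocking.
    pod-security.kubernetes.io/warn: baseline
```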
B: The second one is "defend against logging secrets via static analysis", which is going to stable; that's with SIG Instrumentation, so I've listed out the other SIG that's been co-working on this with SIG Security as well. So those are two possible ones. This is just a call for feature blogs, and I put in the link for where to opt in; only leads can opt in a feature blog. That's pretty much all I have. Any comments or questions?
B: Yeah, so there's a Google group called leads@kubernetes.io, and part of that is all the SIG chairs and, I believe, the tech leads as well. That's all managed through the kubernetes/k8s.io repo; I think there's a YAML file that manages all those groups. So the folks in that group can do this.
B
The
opt-in,
doc
is
a
google
sheets,
and
so
when
the
the
the
rights
given
to
edit
is
given
just
to
the
leads
at
kubernetes
dot
io
and
to
folks
that
manage
the
feature
box
for
123..
A
So
so
what
that
means
is
tap
tap
mirror
in
on
the
shoulder,
and
we
will
go
and
put
your
name
in
the
hat
like
that
is,
that
is,
that
is
one
small
surface
that
we
are
proud
to
offer.
You.
B: And we don't have any feature blogs opted in yet, but I do want to get this train of thought started in people's minds: what can we highlight in 1.23? The two that came to mind for this SIG are Pod Security admission going to beta, and defend against logging secrets via static analysis.
A: Thank you so much for bringing brainstorming to this, because seeing "feature blog request deadline is November 2nd" is one thing, but "here are some things that people might want to read a feature blog about" is super helpful. And with the five weeks of notice, I'm imagining the possibility of having a blog post written not at the last minute, and that sounds dreamy.
B
And
this
is
only
just
to
opt
in
by
the
way
that
that
november
second
deadline,
it
could
usually
future
blogs
are
released
in
december.
So
you
actually
have
till
early
december.
A: Any other thoughts about feature blogs? If somebody wants to jump right on it right now, that's okay, but also if you don't, that's fine too.
A
All
right
zavita:
can
you
tell
us
about
this
paper
review.
D: So this is purely informational, and I also thought this might be an awesome opportunity to collaborate with CNCF TAG Security. They are actually working on a Kubernetes policy management white paper, and it is open for review until October 21st of this year. So I thought I'd just post it here for whoever is interested. If you want to check it out, I also have a link with more info; they have links to Slack channels, working groups, and more details if you want to join and check it out.
A: Thank you for bringing this over. I see many different colored insertion points on the link there, so I think you have gotten several folks to look at it already. Adam, inline CSI drivers: this is a fun one.
H: Yeah, so this started as a Slack thread, I think in the security docs channel, about two weeks ago. My team is exploring actually writing a CSI driver which can be used to create these inline ephemeral volumes, which is an interesting corner of the CSI specification and the Kubernetes volume spec. Generally, what you do is create a volume of type `csi` in a pod.
H
You
then
specify
a
particular
csi
driver
that
has
been
installed
on
the
cluster,
and
then
you
are
free
to
provide
basically
any
configuration
directly
to
the
csi
driver,
and
then
the
csi
driver
creates
an
ephemeral
volume
on
the
node
that
is
available
to
the
pod.
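A minimal sketch of the pattern described here, an inline CSI volume declared directly in the pod spec, looks roughly like this (the driver name and volume attributes are hypothetical; the attributes are whatever each specific driver defines):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.6
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    csi:
      driver: inline.example.com   # hypothetical driver installed on the cluster
      volumeAttributes:            # passed opaquely to the driver
        size: 1Gi
```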
H: Now, one of the things as we've been exploring this: with the current Pod Security Standards, CSI volumes are allowed for restricted profiles, and one of our concerns is that when we actually go to deliver this, cluster admins may not want to take a sort of all-or-nothing approach when they are adding CSI drivers to the cluster. Cluster admins may want to be able to say: these CSI drivers are safe for restricted; some other ones, that other teams may be experimenting with a little bit more, might only be allowed for, say, baseline or even privileged, if they're being that, quote-unquote, paranoid. So I'm wondering if this is something SIG Security would be interested in working towards, maybe providing first-class support for this, probably through a KEP. Right now the advice is that if you want to add additional controls, you need to add third-party admission control.
A
In
a
case
like
that,
being
able
to
specify
that
field
basically
means
that
you
can
mount
anything
that
anybody
else
has
already
made
into
your
pod
anytime.
You
want,
and
that
sounds
frightening,
and
that
seems
like
the
sort
of
that
seems
like
the
sort
of
use
case
that
has
resulted
in
things
like
the
fiber
channel
csi
drive.
A
I
mean
if
the
fiber
channel
built-in
driver
not
being
allowed
in
those
more
restrictive
levels,
because
those
are
the
sorts
of
features
that
those
that
those
built-in
drivers
have
and
so
yeah
like
have
you
have
you
spent
any
time?
Thinking
about
you
know
what
could
an
interface
between
a
csi
driver
and
a
system
administrator
and
an
admission
controller
look
like
for
for
adding
and
removing
those
sorts
of
things
from
from
a
configuration,
or
I
mean
I.
I
really
do
not
wish
to
put
you
on
the
spot.
H: Yeah, so I spoke with David Eads a little bit about this; I think he's from SIG Auth. One of the things that he mentioned is that part of the rationale for why this is allowed for restricted is so that CSI drivers could, in theory, create their own admission that gets installed with the driver. So the driver can say: this sort of behavior is allowed, but for that particular behavior you need to have either baseline or restricted permissions, using some of the annotations.
H
So
one
thing
I
ins
trying
to
think
about
think
this
through
this-
might
be
something
that
sig
storage-
we
probably
have
to
talk
about
as
well
as
similarly
providing
either
annotations
or
labeling
schemes
such
that
a
csi
driver
can
declare
or
a
cluster
admin,
can
add
to
the
csi
driver
declaration
that
this
driver
is
allowed
for
baseline
and
higher,
or
it's
allowed
only
for
restricted
or
it's
allowed
for
or.
H: ...everyone; or it's only allowed in namespaces that allow privileged. That was an initial thought: this might be a way to tackle the problem, at least in a sort of more first-class, generally supported way.
H: There are some drivers that truly only provide ephemeral volumes. I think SIG Storage has the secrets store driver, which is a bridge that lets you get sealed secrets from either a cloud provider or a third-party service into the cluster, and potentially do it in a way such that that sealed secret is not available as a Kubernetes secret, although that is...
A: Yeah, I think that's a really interesting area for future work, and I appreciate hearing a couple of possibilities that have been brainstormed off the top. What do we think? Who here has played with CSI drivers? What have your use cases been like?
G: I mean, I've only played with them for nefarious purposes, but the nefarious use cases are real. I've never actually heard anyone else talk about them, so it's pretty cool to hear about this. Thank you for coming here and bringing it up.
A: I would say then, for those of us who are here: if we don't have a lot of real-world experience playing with these, let's see if we can find some folks that we know in the community who have been using them, and try to get them involved in this conversation.
E: One question I have, Adam; maybe you mentioned this while discussing, or in one of the KEPs. I'm thinking of examples like network policies or node taints, which are somewhat related to security but are not part of Pod Security Standards. Similar to that, is it possible to allow or disallow CSI drivers using labels, allowing use in a specific namespace, without modifying Pod Security Standards? Because the built-in Pod Security Standards may not even be used by some clusters, which will end up using external admission control.
H
I
honestly
am
not
aware
of
how
I
actually
don't
think
pod
security
standards
today
does
allow
you
to
rest
or
lets
you
restrict
it.
The
usage
of
csi
drivers.
I
think
it's
I
think.
The
standard
today
is
csi
volume
mounts
and
pods
are
allowed,
even
for
the
restricted
profile.
A
The
citizen
men
put
them
there,
they're
they're,
yes,
the
sysadmin
put
them
there
and
therefore
we
have
to
either
trust
or
distrust
them,
because
there
is
so
much
there's
so
much
potential
diversity
in
what
a
csi
driver
could
do
that
without
help
from
the
csi
driver.
Yeah
like
pod
security
admission
can't
really
tell
whether
it
is
it
is
dangerous
to
allow
or
whether
it's
fine
to
allow
but
yeah.
I
wonder
like
like
with
the
the
with
the
thing
that
that
you
were
asking
like.
A: I wonder: does the CSI driver get enough information about the volume request that it could itself have an opinion about whether or not it would service that request? Could it be possible, in principle, for a CSI driver to have a list of namespaces that it allowed itself to be used from, and to not service requests that come from other namespaces?
E
Yeah,
that's
what
I
was
hoping
because
for
me
it
seems
like
pod.
Security
standards
would
be
for
specific
fields
that
have
direct
correlation
to
a
security
feature
in
either
linux
or
kubernetes
and
csi
driver
and
which
ones
we
get
to
mount
are
not
directly
a
security
feature.
So
in
that
case,
if
it's
moved
outside
of
that
and
the
drivers
get
to
decide
which
namespace
it's
allowed,
maybe
it
could
be
another
option,
and
this
is
all
in
the
air.
Obviously,
so
once
we
have
something
written
down,
we
can
probably
discuss
more.
H: Yeah, so I think, if I remember right, with the CSI spec, when the CSI driver receives the request to mount the volume, it is allowed to get that pod information, but I think that's an opt-in thing. You can configure a CSI driver such that that pod information does not get to the CSI driver.
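The opt-in being recalled here is the `podInfoOnMount` field on the CSIDriver object: when it's true, the kubelet passes pod details (name, namespace, UID, service account) to the driver on the mount call, and when false the driver never sees them. A sketch (driver name hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: inline.example.com   # hypothetical driver name
spec:
  # Pass pod name/namespace/UID/service account to NodePublishVolume,
  # which is what would let a driver make per-namespace decisions.
  podInfoOnMount: true
  volumeLifecycleModes:
  - Ephemeral
```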
E: Yeah, that makes sense. I know we're close to finishing on time, so: I am very interested to contribute and discuss. If you come up with some kind of a KEP, or any other way to discuss this, let me know.
A: Yeah, it sounds like there are enough possibilities that have been brainstormed already that they could be put into a Google Doc and shared with SIG Auth for a Pod Security Standards kind of look, since they write and maintain all of that code. The folks in SIG Storage, I think, would find it highly interesting too. And yeah, try to match up the use cases that folks would have for how they would want to configure the interaction between CSI drivers and Pod Security admission with implementations that would support those kinds of user stories.
A
Being
there,
I
think,
if
yeah,
I
think,
if
you
could
jot
them
down
and
and
float
them
to
to
sig
off
and
and
seek
storage
like,
I
think,
they're,
I
think
there'd
be
interest
in
that.
A
Yeah
yeah
yeah,
because
it's
like
I
don't
know,
I
think
a
github
comment
thread-
is
a
great
place
to
worry
about
details
and
to
make
sure
that
nothing
gets
missed.
But
I
think
it's
a
really
hard
place
to
to
have
our
feelings
out
when
we
think
we
know
we
want
something,
but
we're
not
yet
sure
what
we
want
right.
A: Does anyone else have anything to add to this?
A: That being the case, we'll do what we always do and say: thank you so much for coming. It's been very great to have everybody's thoughts and caring in the room here, and we look forward to seeing everyone again soon.