From YouTube: sig-auth bi-weekly meeting 20210421
A
So before we go through the outstanding unresolved issues, are there any particular topics or concerns that anyone wanted to discuss today?
B
I was interested in two aspects: the integration with SELinux by default, because I saw those updates and I honestly don't know what the update means to me; and also volumes, whether we're going to try to do anything with volumes or not, in terms of either exemptions or something.
A
Anything else that anyone wants to make sure to cover? Actually, I'd like to touch on the Windows conversation as well. I'm not sure if I see Ria here, and I'm not sure if any of the other folks that were involved in that discussion are here.
A
Thanks. Was anyone from sig-instrumentation able to join?
A
I
just
pinged
that
channel
again
to
see
if
we
can
get
some
input
on
the
question
of
metric
cardinality.
I
think
that's
kind
of
the
main
open
question
there.
A
Yeah, that seems like something that would be beta-blocking: having the metric solid before it's enabled by default seems important. Yeah, agreed. We have a concept of alpha, beta, and GA metrics now too, right? Actually, it's just alpha and stable; there's no beta.
A
Let me scroll down to... yeah. So, Jordan, you had an open question about what would happen if type is unset. Actually, maybe before we do that, let me just give an overview of what we currently have there, since David said he was a little confused.
A
Yeah, so the current proposal that's written down here is basically saying: leaving seLinuxOptions unset is allowed. seLinuxOptions is a field on the pod security context and also the container security context, and if you do set it, then, as written here, type must be set to one of these four values.
A
Level can be set to anything or left unset, I think, and role and user must be left unset.
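A rough sketch of a pod that would satisfy the proposal as described here (the type value is one of the SELinux container types named later in this discussion, and the level is just an illustration):

```yaml
# Sketch only: a pod that would pass the written proposal.
apiVersion: v1
kind: Pod
metadata:
  name: selinux-example
spec:
  securityContext:
    seLinuxOptions:
      type: container_t        # must be one of the four allowed types
      level: "s0:c123,c456"    # level may be set to anything, or left unset
      # role and user must be left unset
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```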
C
Yeah, so if I got Jordan's question correctly: can we keep the type unset as well? And yes, we can, because the container runtime will pick the default for you. Okay, yeah.
C
I can't imagine us using that. If it changes in the future, we may have to revisit this, but today the defaults that are used (system_u, object_r) are enough for container isolation.
C
And then possibly, if there is any kind of inter-process communication or something, or if there is any breakout, you may be able to read any files written by that container.
B
Okay, so let me just make sure I've got it. In a normal case, where someone has not broken out of their pod, being able to specify your level allows you to have two pods which can do things like share a volume mount and have that piece work, or, if you control both of them and put them inside the same PID namespace, they can communicate with some sort of inter-process communication, like you spoke about.
B
Yeah, and in order to have someone exploit this, you would either have to be sharing a volume, which wouldn't happen unless you had already allowed it, or there would have had to be a bug somewhere else that allowed you to escape from your container and mount their file system. Okay.
B
Okay, and we would choose this as baseline instead of restricted because there is that case: if you have one bug in a container runtime, and someone knows the level that they would choose from some other namespace, and they're able to exploit that bug, they could attack, and that's why it's not restricted. Is that it?
A
By allowing these specific types, another admission plugin could certainly add constraints, right? Like forcing it to be logreader, or forcing level to be unique per namespace.
B
So you're saying that, because we bound these particular types, allowing the selection of a level in restricted is a reasonable trade-off to me.
C
I think even limiting the types in restricted may not make much sense, because the container_init_t is for allowing someone to run a systemd inside a container, the container_logreader_t is strictly for reading logs, and the container_kvm_t is for running things like Kata, which adds one more level of VM security. So restricting it to just container_t probably doesn't make it any safer in restricted.
A
Oh, I don't know; container versus non-container cuts a lot of things out, so that cuts out host-level things, but it doesn't protect you from other containers at all, obviously.
B
So that bit about container_init_t: does it make sense to run containerized systemd inside of restricted pods? That's a thing I just don't know the answer to.
A
Yeah, same here. I guess the last question I have on this is: should we update the pod security standards with the same advice? I would like to see those be more deterministic.
D
We should make sure all the implementations follow the same standards.
A
Yep, sounds good. All right, I suggest we talk about volumes next, since we were just touching on volumes a bit.
B
I guess I have two concerns. One is that I don't actually know offhand which volume plugins are allowed for baseline and restricted, and then I'm not sure how we're handling CSI drivers, or whether we would handle flex volumes at all, or whether we just consider them "now you're too old, move to CSI."
A
And then let me just share what we say for restricted, and we can talk about whether these are the right restrictions or not. For restricted, we essentially say you're allowed to use the atomic-writer volumes, basically the ephemeral volumes: emptyDir, secrets, config maps, downward API, projected volumes, all those ones. And then for everything else, persistent volume claims. Yeah, so for everything else we say: use a persistent volume with a persistent volume claim.
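A minimal sketch of what that restriction allows in a pod spec (the volume type names follow the Kubernetes API; the secret, config map, and claim names are hypothetical):

```yaml
# Volume types the restricted profile allows, per the discussion:
volumes:
- name: scratch
  emptyDir: {}
- name: creds
  secret:
    secretName: my-secret          # hypothetical secret name
- name: config
  configMap:
    name: my-config                # hypothetical config map name
- name: pod-info
  downwardAPI:
    items:
    - path: labels
      fieldRef:
        fieldPath: metadata.labels
- name: combined
  projected:
    sources:
    - serviceAccountToken:
        path: token
- name: data
  persistentVolumeClaim:           # everything else goes through a PVC
    claimName: my-claim            # hypothetical claim name
```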
A
With a CSI driver you could do anything someone wanted. Whether that's a good thing or a bad thing, and whether baseline should assume that any CSI driver is acceptable to use, is not immediately clear.
D
I think most CSI drivers do require a hostPath mount for the kubelet root directory, though.
A
Right. Most of the CSI drivers, or at least most of the initial ones, are out-of-tree implementations of the cloud storage volumes that are disallowed here: the GCE CSI driver versus direct access to GCE persistent disk, and, you know, Gluster or Ceph or Azure File or Azure Disk. There are CSI drivers for those things, and so if you allowed using a CSI volume for Azure File and Azure Disk or GCE persistent disk, that would just let you use those things via their CSI drivers.
B
And here's where my knowledge ends: is the CSI driver somehow better enough that we should do that, or are we gonna say no, these things are roughly equivalent?
A
My take on this is that it's sort of a similar case to some of the other extension points that we talked about, where, if you have a CSI driver, first of all, your cluster administrator or someone with administrative privileges had to have installed it.
A
So if it's available, then someone with elevated privileges already made it available, and at that point we can say: well, if this is a privileged volume and you want to place additional restrictions on it, then that CSI driver should ship with, or the administrator should set up, additional policy restrictions through admission control.
B
That seems unfortunate to me. Can we consider a way to indicate whether a particular... yeah, I can't type and take notes, I can't type and talk at the same time. Can we consider whether there's a way to allow a cluster administrator to indicate that a particular CSI driver should be allowed in restricted? For instance, I know of one where the idea is...
B
The goal would be to have that turned on for restricted users, so they'd be able to use it. It seems fairly reasonable; I'd like to make that use case possible.
A
I'm trying to remember, parameter-wise, for something like the CSI secrets driver: would allowing a pod to fully spec out a CSI volume give it full control over all of the parameters passed to the driver, in a way that would let it pull things that maybe weren't supposed to be granted to it?
D
I think the way we are controlling that is by using RBAC on the provider class custom resource, since that's namespaced.
D
So the idea is: if you have access to create the SecretProviderClass custom resource in that namespace, then your pod can reference it.
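A sketch of the pattern being described, using the secrets-store CSI driver's SecretProviderClass resource (the names and provider here are hypothetical, and the provider-specific parameters vary by installation):

```yaml
# Namespaced resource: RBAC on creating this controls what pods can mount.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets            # hypothetical name
  namespace: team-a            # hypothetical namespace
spec:
  provider: vault              # hypothetical provider
  parameters: {}               # provider-specific parameters live here
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: team-a
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: app-secrets   # pod references the class by name
```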
B
Could we not annotate those and say: yeah, the cluster admin created it? And because probably someone provided it, they could say: I know that this one, the way it works, is safe to be added to restricted, for the goals of restricted. And we could avoid configuration at a per-cluster level, like fine-tuning how my restricted, baseline, and privileged work; instead, we would associate it with the CSI driver.
B
That's another set of stuff... well, I mean, if we put this as a peer to a runtime class, it could be in that config file, which is not as flexible, but it does at least provide a way: if you own the cluster, you can say restricted people can use it.
A
Pods have also added the ability to have an ephemeral volume source that goes through a PVC/PV cycle. I'm a little skeptical about the performance of that: every pod that stands up spawns a PVC, which gets a PV provisioned, which then mounts, and it gets cleaned up on teardown. I'm not super thrilled about that being the story.
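The generic ephemeral volume source being described looks roughly like this (the storage class name is hypothetical); each such pod triggers the PVC/PV provision-and-teardown cycle mentioned above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-example
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:         # a PVC is created and deleted with the pod
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard   # hypothetical storage class
          resources:
            requests:
              storage: 1Gi
```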
B
I've got another example; I can find the repo after this. This came up because someone said, "hey, I have this CSI driver," and it was "oh, your CSI driver is pretty cool," and then it was "wait, I'm not gonna be able to use your CSI driver."
A
Based on what I'm hearing here, I think I'm gonna propose that we proceed with the restrictions that we have today, and I think this is something that we should continue to explore. I don't think this decision needs to block.
A
The fact that CSI is allowed by baseline means that, unless you are really locking down your namespaces, anyone can make use of this. I am on the fence about it in restricted.
B
Yeah, I want to inject; I mean, it's an easy way to inject without fanning out data to every namespace, right? If I had it to do over again, would I avoid placing the config map containing "here's how you connect to the kube-apiserver" in every namespace and duplicating it everywhere? I probably would have.
B
Okay, I can admit to not having tried that.
A
Yeah, I don't know. I think it probably comes down to the parameters, and who controls those parameters. That's definitely a double-edged sword: if you give a user control over parameters, they can set sensitive parameters to bad things, and if you don't give them control over parameters, then it's very hard for them to express what they want, and you sort of get into needing a ton of persistent volumes.
B
Now, that varies between what a CSI driver is doing. I mean, there are some CSI drivers where, no matter what you pass them, they're not going to be dangerous; they can't be dangerous because of the way they're designed, right? It's a real shame not to allow those in restricted, because I know that I want to use them, and I know that I won't be able to.
D
By the way, I haven't heard too many people complain about this, so I'm wondering if we should maybe gauge how many people want to be able to use CSI when they have restricted.
B
Standard, but in OpenShift we do, and we do want to be able to mount, say, a particular CSI driver. We have one being created now, or that exists now, and we're looking at using it, and this would stop us from being able to run restricted.
D
So for secrets-store CSI, we actually started with PV and PVC, and then we moved to inline because it was easier for users, so that they don't have to go create a PV and PVC every time.
B
It seems like you could have a policy where the default, take-no-action behavior is exactly as described for baseline, but a CSI driver that is known to be safe for restricted could be added to exclude it from this, similar to a runtime class. Yeah, that's probably the most straightforward thing we have to compare it to.
A
Yeah, the thing that I'm not thrilled about is inspecting the API; I don't want it to be stateful in its evaluation. If you know the policy level you want to apply, I don't want you to have to look at other things in cluster state to decide whether a particular object is good or bad.
A
Yeah, the volume attributes would not be accessible to someone who only had the PVC or the ephemeral driver, and actually those attributes are good examples of the types of things I wouldn't want a restricted pod to be able to set.
A
I will add an unresolved section for this, with some of the options that we discussed here.
B
There's a CSI hostPath driver. Do we special-case that one as excluded for baseline?
A
No, we tell the cluster admins not to install it. Okay, that's one of those examples where you can do anything in a CSI driver, even things you shouldn't do. Yeah, you could add a CSI driver that lets you exec arbitrary code as root given the right parameters, like a "format my API server's hard drive" CSI driver.
A
The baseline policy for capabilities is: adding additional capabilities beyond the default set must be disallowed. The problem is that Kubernetes doesn't define the default set; Docker does, and we sort of assume that's the default set. The question is: do we want to...
A
Remove that? The trade-off here is: being able to add capabilities that are already in the default set is nice if you want to take more of an explicit-allows approach, where you say drop ALL, which removes all capabilities, and then add only the ones that you know you need. The problem with this approach is it means that runtimes can't customize the default set, which you can do through containerd, and probably CRI-O.
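The explicit-allows approach just described, in a container's security context, would look like this (the added capability is only an illustration):

```yaml
securityContext:
  capabilities:
    drop: ["ALL"]                 # remove every default capability
    add: ["NET_BIND_SERVICE"]     # then add back only what's needed
```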
A
Policy-wise, I definitely think that forbidding add makes sense in restricted.
A
Since we don't allow running as root, there's not really any reason to add a capability, and we don't handle the... I can never remember the name for the term... the flexible non-root set of capabilities.
A
That's in the future, next year; it's always a bit away. When we get user namespaces, we can actually say: for a container that has user namespaces set, this is what's allowed, and for one that doesn't, this is what's allowed.
A
That will make it easy for us to do that when that time comes. I am out of my depth capabilities-wise, so I will defer to Tim and Mrunal and whoever else understands them.
A
Would taking the most widely used set of defaults, the Docker defaults that were linked, cover most of the places where people are dropping all and then adding back a couple? For someone to get in trouble with that approach, their runtime would have to have customized things, and they would have to be dropping all and then adding back a customized thing that's outside the normal Docker set of defaults.
A
Right, so everything Mrunal posted, DAC_OVERRIDE, FOWNER, SETGID, SETUID, SETPCAP, NET_BIND_SERVICE, and KILL, all of those are included in the Docker default list. The Docker default list includes another seven that are not present in that list. Because you can always layer another protection on top of this to crank down permissions tighter, if the Docker default list includes the defaults from other container runtimes, that seems like an okay list to work from for baseline.
B
Well, I guess I don't know what all these do, and I suppose, if Mrunal told me to turn on all the ones that Docker turned on, I wouldn't know to tell him no.
A
So forgive my ignorance: if you don't add NET_RAW (CAP_NET_RAW), does that mean you don't have it, or does it depend on what container runtime you're running?
F
Yeah, so I think Mark's concern with this was just having to do extra configuration, and initially he thought it wasn't supported on the Docker side as well. It looks like Docker does support it, by using the docker handler. And then, for needing additional configuration, I looked into it, and there's a default handler for all of the deployments.
F
Yeah, I think so. Mark is out this week; I would want to go over it with him, but I think his biggest concern was that you'd have to go modify the nodes to add one of these runtime classes and modify the containerd configurations, which I don't think is the case, because there is a handler that's there by default with the standard containerd deployment.
A
There is one concern that came up around the use of runtime class, which is: suppose I have a hybrid cluster running Windows and Linux nodes, and everything is running dockershim. I know dockershim is going away, so maybe this is not something to worry about too much, but the same would apply to, say, containerd with the same name for the runtime handler configured everywhere.
A
I create a pod, I assign it the Windows runtime class, and then I explicitly set the node name to make it land on one of the Linux nodes. It gets exempted because our policy thinks it's a Windows pod, and it ignores the scheduling constraints because I set the node name manually, and then, because it's using the default docker handler, it gets to run without constraints on the Linux node.
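The bypass being described would look something like this (the runtime class and node names are hypothetical; setting nodeName skips the scheduler and its selector checks):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sneaky
spec:
  runtimeClassName: windows     # hypothetical class; policy exempts "Windows" pods
  nodeName: linux-node-1        # hypothetical node; nodeName bypasses scheduling
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```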
A
So the issue here is mostly having a runtime handler that works everywhere, and then also not having... I mean, ideally we would have a separate admission controller for Windows that says: you're running with the Windows runtime class, so you're not allowed to set any of the Linux security options.
F
By using a combination of runtime classes and node names (not node selectors), you could potentially schedule a pod on Linux that was technically exempt as a Windows pod. So...
A
Yeah, dockershim is maybe even removed in 1.21... it's certainly deprecated. So no, no, we never get that removed.
B
You'd still try to check the security context bits on a pod, right? You'd still be able to mount these volumes that are restricted, and you'd want that to get checked, right?
A
There's also talk of adding Windows-specific security context options. As soon as we have a standard way of identifying Windows pods, we can actually build Windows policy into this admission controller that we're designing, which I think is the ideal solution long term, at least. We are over time, so I need to drop off; I'd like to be respectful of everyone else's time as well. I'm sorry we sort of ran out of time for the Windows discussion.
A
I
think
we
do
need
to
continue
having
these
so
I'll,
make
sure
that
we
put
windows
at
the
top
of
the
agenda
next
time.
So
thanks
everyone
for
coming.