From YouTube: Sig-Auth Bi-Weekly Meeting for 20211027
A
Hey everyone, this is the SIG Auth meeting for October 27th, 2021. We have a pretty light agenda today, but we can go ahead and get started.
A
Are the folks that wanted to talk about the volume security standards all here?
A
Hi. I have the doc open, if you all want to talk about it and give the pitch of what you're trying to solve.
B
Yes, thanks. So we have this issue, a potential security problem with volume mode conversion, and we discussed this approach already in SIG Storage, so we just want to bring it up with SIG Auth and see what you all think. Raunak, do you want to take it over?
C
Hey, yeah, sure. So I think Xing has had a bit of conversation in the past with, I think, either SIG Auth or SIG Security about this. Basically, we have a small issue, a potential security issue, with our volume snapshot feature, and we've been discussing recently how we would want to address that. So what we want to run by you, without getting into too much detail about the snapshot issue itself, is actually introducing something like volume security standards, basically based off of the pod security standards that were introduced recently, right?
C
So if you can scroll down below the non-goals, yeah. Just like how pod security standards introduced these standards, and then a controller to enforce those standards, but in a way that's not very limiting, we were thinking of introducing something similar, but related to volumes instead of just pods.
C
So if I just had to summarize this document: we wanted to introduce volume security standards, which right now are basically privileged and restricted, where privileged would be the least restrictive policy and restricted the most. And right now we just have one use case, right, this snapshot issue that we've known about and want to solve, so it made sense for now to just have two standards defined.
C
In addition to that, we also wanted to introduce this admission controller. Where exactly to introduce that admission controller is probably a bit too early to talk about; it's probably going to come up in the design. But the idea is to have an admission controller which is going to go ahead and look at the standard and at the policy that's applied to a particular namespace, and then try to enforce that policy.
C
Right, so just like pod security standards and the way that works right now, we were thinking of introducing three modes: enforce, audit, and warn. Again, if enforce is labeled onto a particular namespace, then the standards applied to that namespace are enforced when this admission controller intercepts calls to create a volume in that namespace. If it's audit, then violations are just audited, and if the mode is warn, then we just warn the user but go ahead with the operation.
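The enforce/audit/warn behavior described here can be sketched roughly as follows. This is an illustrative sketch only, modeled on how pod security admission reacts to namespace labels; the label key and function shape are invented for the example and are not part of any real Kubernetes API.

```python
# Hypothetical sketch of the enforce/audit/warn decision for a volume create
# request. The label key "volume-security.example.com/mode" is invented.

def evaluate_volume_request(namespace_labels, violates_standard):
    """Return (allowed, warnings, audit_annotations) for a volume create call."""
    mode = namespace_labels.get("volume-security.example.com/mode", "warn")
    warnings, audit = [], {}
    if not violates_standard:
        return True, warnings, audit
    if mode == "enforce":
        return False, warnings, audit           # reject the request outright
    if mode == "audit":
        audit["volume-security"] = "violation"  # record the violation, but allow
        return True, warnings, audit
    # warn mode: surface a warning to the user, but let the operation proceed
    warnings.append("volume violates the restricted standard")
    return True, warnings, audit
```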
C
So a lot of this proposal is again based on pod security standards. We also looked at pod security policies, but we sort of had the idea that, you know, since the community moved on from that in terms of pods, it doesn't make sense to go back to that for volumes. So yeah, that's basically it.
C
I didn't expect to finish it in like three or four minutes, but really the idea is to get feedback from SIG Auth about what they think about something like this applied to volumes. Right now our use case is singular, in terms of restoring a PVC from a volume snapshot, but we want to build something that can be extended in the future for other potential security issues or concerns that may come up. So yeah. Anything?
C
Any comments from SIG Auth? I think the main focus would be getting feedback on this document. It's obviously very high level at this point; we're also drafting a KEP, which we would probably share with SIG Auth as well. But we just want to get an idea of what you thought when you went through pod security standards, and whether a similar mechanism would be applicable to volumes as well.
D
One of the things that was probably important when we started with pod security standards was enumerating the things that we wanted to protect, and this seems to be describing a mechanism without a really clear sense of what it's trying to protect against.
D
So there's one example use case here, the mismatched-mode thing, getting access to a volume with different modes. But beyond that, it's hard to see what is being protected against, and so it's hard to map that to this three-tier restricted, baseline, privileged mechanism.
D
Yeah, so with the one example use case, I'm having a hard time mapping it to how problematic this is. Running a privileged pod seems pretty clearly privileged; accessing a volume with different modes might be okay, but might not be.
B
Maybe, Raunak, can you go over the use case and how this problem was discovered? Maybe go back to the beginning; I think we maybe skipped that part.
C
Yeah, sure. So the one use case that we do have is definitely well defined. Right now the issue, and the reason we started thinking about this, is that a user can basically create a PVC with a volume mode of Block, then write malformed data to it, and then take a snapshot of that volume.
C
And then, when they restore the snapshot as another PVC, they can actually specify the volume mode as Filesystem, right, even though it was originally a block volume. And then the potential issue is when a user uses the second PVC and kubelet tries to mount it.
C
So we can't just reject that altogether, but at the same time there can be a malicious user who just wants to, you know, crash a system. So that's where the dilemma comes in, and that's how we ended up defining, like, how do you decide who's a privileged user and who's not, right, who's allowed to do this operation and who's not. And that's how we got to this whole namespace-level thing.
D
Yeah, so one of the reasons pod security standards were useful was because there are like 35 different things in the pod spec; it's just a very huge surface area in terms of the API, and bucketing and simplifying what a pod can do was valuable.
B
So if there are any other alternatives that you know of, we are open to those as well. This is definitely just one case, but because it is a potential security issue, right, we thought we had to find a way to solve it, and we couldn't find another, simpler way to solve this problem. That's why we thought this security standards approach seemed like a good one we could borrow.
D
I mean, just as a strawman proposal: if the expectation is that, under normal circumstances, a PV won't be accessed in both Block and Filesystem mode, that it'll be one or the other, what if you required some indication on a cluster-scoped object, the PV or the snapshot, that this mixed-mode access was okay or expected?
B
This is not cluster scope; this is user scope. This is basically a special use case for backup and restore: backup vendors use this to take efficient backups. So they do use this conversion, because normally your volume could be used in Filesystem mode, but underneath it is a block device, so they convert it temporarily to Block during backups.
B
But then, you know, when they restore it, of course it will restore back to its original mode. So it's definitely a convenient way for backup vendors to do efficient backups. But then there is this potential security problem.
E
(Inaudible question.)
B
Would most plugins allow this? Well, normally it's not the plugin; it's the backup vendor that will be doing this temporary conversion. So it's not at the CSI driver level, actually. When the backup vendor does that operation, it's going to create a temporary volume and change it to Block mode, so they can get the changed blocks.
A
If I could ask a question on this. I think the feedback I'm getting from Jordan is that it seems like you have one very crisp use case, and while it seems probably important to handle it somehow, since there is potential for quite significant harm if it is exploitable, it might not warrant the complexity of something like PSP or pod security standards.
A
But you could imagine that when such an admission controller is enabled, it would, by default, disallow changing the mode unless the actor requesting it has some extra permission, granted in some out-of-band way, that allows it to request a different mode than the volume is supposed to be in. So it would be expressed through some admission restriction in the Kubernetes API, targeted specifically towards this just one use case, right?
A
So the idea being that the backup vendors' controller, or whatever is running that's doing this backup, would obviously be given that permission, because it's doing a good thing: it's backing up your stuff. But normal users would not be granted that permission, or maybe they would be under special circumstances where the cluster admin has decided that it's safe, either because the user is trusted or because the thing they're interacting with is benign.
B
Did we consider, I thought we looked into this earlier, just a simple webhook controller? I don't remember.
C
So I think where we got stuck, and where we decided to move to the standards, was: how do you define who's a privileged or allowed user and who's not, and how do we differentiate within the same namespace, or something like that? That's when we moved to this, if I'm not wrong.
D
Sorry, when storage vendors do this cross-mode snapshot restore, what namespaces are they usually doing that in? Is it the ones the user's stuff is in, or is it in a separate backup/restore controller namespace?
D
Okay, if that's the case, then I'm not seeing how a namespace-level policy helps to allow a cross-mode backup restore.
B
It's not an attribute of the volume type. Well, so, while the backup vendor does have to do something that is, you know, kind of privileged to make those things happen, those are the user workloads that they have to back up, and they are, you know, in the user's namespace.
B
They have to be in a user namespace as well. In the middle, the data will be moved to someplace else, so there's a transition somewhere in the middle. But when they convert this mode, the mode is specified on the PVC itself, right? So that's why I'm saying it's a user-scoped thing: they change that, and then they get the data, basically.
D
If we're calling changing a mode an escalation, or a privileged operation, and that is happening in the same namespace that the user has access to, that seems like you're losing whatever protection this is providing for the duration of that operation, if we're concerned about users maliciously, you know, creating a PVC or snapshot in Block mode, writing bad data, and then doing something with it. But in order for normal backup and restore operations to happen, we have to expose the user namespace to privileged operations.
A
I think what Jordan is saying is that the tenancy model of the pod security standards is that it's per namespace. What you're saying is that you need to somehow slice up a namespace, but in a weird, transient sort of way: saying that this privileged actor can somehow change the mode and do special things, but at the same time normal users can't leverage that during the duration of the operation.
A
Yeah, presumably the user who's running the app has access to the namespace to do, like, app stuff. So if you're in the middle, trying to have a more privileged actor do something, you can't use the namespace as your tenancy boundary. You have to have some other concept that says this actor is allowed to do something special, but not some other actor, right?
A
So the example that I remember from OpenShift: the reason the pod security standards have the hard-coded list of, you know, "this user is special" as a config flag is to support OpenShift's build controller, which does privileged operations in shared namespaces with other untrusted tenants. But it does so by very carefully coordinating with the underlying container runtime and the Kubernetes API in a very orchestrated way, and it's only safe because they control the whole vertical stack.
A
But that's effectively impossible to guarantee unless you control the whole stack in a very particular way, at least in that particular example.
D
So, alternative things to consider. One is: if there is an attribute on a cluster-scoped object that could be required to allow this cross-mode operation. That's one possibility. Another possibility is...
D
...some sort of admission check that would actually do an authorization call. An example of this would be when you sign or approve certificates: we actually do an authorization check to make sure that the API requester signing or approving a certificate has access to that particular signer name. So you could do something similar. You could say that if you're going to mount a restore from a snapshot in a different mode than it was produced in, you need to have this authorization. So there is prior art for that; it gets a little fiddly, but it's doable.
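The certificate-style authorization check being suggested might look roughly like this sketch. The verb and resource names are invented placeholders, and `authorize` stands in for a SubjectAccessReview-style call to the cluster's authorizer; none of this is a real Kubernetes API.

```python
# Sketch of an admission-time authorization check, by analogy with CSR
# signerName checks: before permitting a cross-mode restore, ask the
# authorizer whether the requesting user holds an extra (hypothetical)
# "convert" permission. The verb/resource names here are invented.

def allow_cross_mode_restore(authorize, username, source_mode, requested_mode):
    """authorize(user, verb, resource) stands in for a SubjectAccessReview."""
    if source_mode == requested_mode:
        return True  # no conversion, nothing extra to check
    return authorize(username, "convert", "volumesnapshotcontents")

# Example: a fake authorizer granting the permission only to a backup controller.
grants = {("backup-controller", "convert", "volumesnapshotcontents")}
authz = lambda user, verb, resource: (user, verb, resource) in grants
```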
D
I think the question there would be about backwards compatibility: if there are people already doing this, what would they have to do or set up in order to continue to be able to do what they had been doing, without getting interrupted?
B
Okay, so about this authorization check, do you have any examples?
D
I'll drop a link to what we do for the certificates stuff. Okay.
B
This is just a potential issue. So actually, this was brought up by someone who was reading about, you know, potential kernel bugs. He was basically saying that if there is such a bug... currently there's none, so we're not saying this is something we have to fix right away; we can take time to think about how to fix it. But this type of bug in the kernel comes up from time to time, and if there is such a bug, then if someone writes this malformed data, it could potentially cause this to happen. So that's how we came up with this. We actually sent an email to... there was a security group.
B
This was maybe a few months ago; we actually asked about this problem. But then, I think, we were told that maybe this is not...
B
Maybe it's not, like, urgent or something, right, but we still should look into it. And then, when we saw the pod security standards, I thought, oh, maybe this is a good way to solve this. We were having trouble finding a solution for this problem.
A
So if I understand correctly, if we're looking at the page I have on the screen, the concern is: right, so this CVE is from 2019, so two years ago, whatever. But if another CVE like this is discovered, or if you're running an old kernel version that isn't patched for it, theoretically the Kubernetes API, through these mechanisms, gives you an entry point to cause the kernel to consume a malicious volume, which could then allow you to execute code in the kernel.
A
And thus, if you had, you know, a multi-tenant cluster or whatever, you've broken that tenant boundary and can possibly do whatever the kubelet is able to do in that environment. Yes, and so it's theoretical in the sense that we're unaware of any current CVEs.
B
Yeah, if you can add that to the doc, that would be awesome; then we can maybe look at this further. Okay, and you also mentioned a cluster-level control: maybe if somehow we have some parameter in cluster scope that we can use to check, like adding another parameter at cluster scope and using that.
D
One possibility would be: if there's a trusted component that is involved in producing these backups, it could be given access to annotate or set a field on the VolumeSnapshotContent to say that even though this was produced in Block mode, it's okay to mount it back in Filesystem mode. So it seems like it would make sense for the snapshots to record what mode they were produced with, and then enforce matching modes under normal circumstances.
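That suggestion, recording the source mode on the snapshot content and letting a trusted component explicitly opt in to conversion, can be sketched as follows. The annotation key and object shape are hypothetical, made up for illustration.

```python
# Sketch: the snapshot content records the mode it was produced with, and a
# trusted backup component may set an (invented) annotation that explicitly
# permits restoring in a different mode.

ALLOW_CONVERSION = "snapshot.example.com/allow-volume-mode-change"  # hypothetical key

def restore_permitted(snapshot_content, requested_mode):
    """Allow matching modes always; allow a mismatch only if annotated."""
    recorded = snapshot_content["sourceVolumeMode"]
    if requested_mode == recorded:
        return True
    return snapshot_content.get("annotations", {}).get(ALLOW_CONVERSION) == "true"
```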
B
I see, yeah. Maybe we can explore that, right; that would be relatively straightforward, actually, if we do it that way. Yeah, okay. So we'll look into those two suggestions.
B
Yeah, thanks. So if you have any other comments, please, you know, add them to the doc.
B
So yeah, we're hoping you all can review when we, you know, decide which approach to go with.
A
Cool, anything else on this, guys, before we do the discussion topic? Any comments?
A
So I just added this one discussion topic. Starting this week, I switched the weekly triage meeting for SIG Auth to try to focus on 1.23 stuff, because code freeze is coming, so I've just been focusing on that. During that time I saw a bunch of stuff with the new pod security work, and I just wanted to get an update from Jordan, Tim, or anyone else who's been working on it. Basically, is there anything you all need help with?
D
We've got benchmarking in place. The default path when you upgrade an existing cluster to a version that has this feature enabled, where things are privileged by default and pods are allowed, is very, very performant, which makes me happy. It basically takes a couple hundred nanoseconds and, like, one extra allocation for every pod that's admitted, so the impact of enabling this feature on existing clusters is essentially nil.
D
Tim is working on getting the metrics stuff in place. Sam was able to get the webhook version building, an image built, and manifests put together, so that people can install this on existing clusters and try it out. That's the first PR in that list there, so that is in the merge queue.
D
And then there were a couple other things listed under the beta items, around prioritizing which pods we check in the namespace: if there are a bunch of pods, we want to check unique pods first, before we start checking all the pods that came from the same replication controller. But generally it's on track. I would expect the code work to wrap up this week or early next week, and then the docs work to be picked up to update the docs for the step to beta.
D
Do any of these need an extra review or anything, or are we good? Right now everything has an owner and a reviewer. I think what I would probably want help with, once the webhook PR merges, is people actually trying it out. It's a two-line installation: you make certs and then you kubectl apply, and that installs the webhook into an existing cluster, and you can play around with it.
A
So you guys, if I remember correctly, successfully did your 1.0 release, right? Yes, yes. So, if I understood correctly, you went ahead and just promoted the APIs to v1.
A
Excellent. Well, I'm glad that's all there. I think a lot of folks have been waiting for that. That's exciting.
A
This is recent: Jordan, can you remind me who owns this feature? Like, I know SIG Auth does, but is there a person?
D
Can you zoom in? I can't read what you're looking at. I was just... I just saw that discovery was failing; Michael Taufen was the one to set that up, is that right?
A
Well, I don't think we have anything else on the agenda. If anyone wants to talk about something real quick, we can; otherwise we can give everyone back 17 minutes.
A
See you all in two weeks, then. Yeah? Okay, yes.