From YouTube: Kubernetes SIG Security 20210909
A
As usual, we'll give a moment for us to achieve a steady population.

B
I wanted to do a quick introduction before Fadi — he's here now. He reached out to me some time ago, some days ago, about getting more involved with Kubernetes security. So I just wanted to say: hey, everyone, we have a new friend here who wants to participate. So, hi!
A
I'm so glad to hear that. I mean, honestly, that's a lot of why I'm here too, because I love the attitude that DevOps has, and I love the fact that when you combine DevOps with security, you can take the attitude that DevOps has and use it to break more things, use it to harden more things, use it to detect more things, in a really broadly participatory way. And that's absolutely the best, in my opinion.
A
Hi, welcome! Okay, it is five after; we're gonna officially call this thing started. Hello, Kubernetes SIG Security! I am so, so happy to see you all again. We have a few things on the agenda here. I see that Ray has volunteered to take notes — super grateful for that, thank you. And yeah, let's just jump right into the announcements and move on from there.
A
So — audit subgroup, what do you got for us?

E
Hello! Not much. We're still on, and we're close to doing the vendor announcements, but I will have a PR to update the vendor announcement date. Other news: the roadmap. We have a roadmap that has been merged, for those who are new or who have newly joined in the past few weeks. We do plan to have more frequent, smaller audits in the future.
A
That's great news. It is always a challenge to negotiate a contract with a large organization, so I'm grateful that you all are taking the time to really do a good job of that. Looks like for now we'll pass over docs. And do you want to tell us about what's happening with tooling?
F
Okay, I think this is good now. Okay, cool, all right. Yeah, so welcome, new and existing contributors. One potential update I wanted to check and share: we discussed around two meetings ago that we are going to attempt to create an automatically refreshing list of CVEs, and especially the fixed ones, so that it is easy to programmatically pull that information for any end users of Kubernetes, and they know which version has the fix and which ones don't.
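The programmatic pull described here could look roughly like the sketch below — a minimal illustration, not the subgroup's actual tooling. The label name (`official-cve-feed`), the search-query shape, and the canned response are all assumptions standing in for whatever ultimately ships; the demo filters a canned payload instead of making a live API call.

```python
import json
from urllib.parse import urlencode

# Hypothetical label name -- the real label was still under discussion.
LABEL = "official-cve-feed"

def search_url(label: str) -> str:
    """Build a GitHub search-API URL for Kubernetes issues carrying the label."""
    q = f"repo:kubernetes/kubernetes is:issue label:{label}"
    return "https://api.github.com/search/issues?" + urlencode({"q": q})

def cve_issues(payload: dict, label: str = LABEL) -> list:
    """Pull the titles of issues that carry the CVE label."""
    return [item["title"] for item in payload["items"]
            if any(l["name"] == label for l in item["labels"])]

# Offline demo: a canned response shaped like the search-API payload,
# used here in place of an actual HTTP request.
canned = json.loads("""
{"items": [
  {"title": "CVE-2021-25741: symlink exchange can allow host filesystem access",
   "labels": [{"name": "official-cve-feed"}]},
  {"title": "unrelated flaky test", "labels": []}
]}
""")

print(search_url(LABEL))
print(cve_issues(canned))
```

An end user would hit the same URL with any HTTP client and filter on the label, getting the fixed-CVE list without scraping release notes.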
F
We also wanted to create a label that we can filter on for all of those GitHub issues which have fixes for CVEs. To create that label, one of the things requested in the comment was: do we have some evidence of consensus within SIG Security that this is the right approach, as well as from the SRC? So I went back to our recordings, and it seemed like we as a group were good from a consensus perspective.

F
But I don't remember if we have a confirmed consensus with the SRC, so I just wanted to open that up, and if somebody who is in the SRC is there — including Tabby — I want to hear their thoughts and discuss if needed.
A
Yeah, thank you so much for bringing this back up. I felt like we were good from the SRC perspective, but given the two hats that I wear there, I wanted to make doubly sure that it wasn't just what I thought. So I have brought this up with the rest of the SRC. And so I figure, if SIG Security folks would like to go and just heap LGTMs onto that, then by the number of LGTMs we can show that there's broad support for it. And then, similarly, from the SRC side, we will make sure that we are comfortable with it, as I believe we are.
F
Okay, yeah, that should work. I like the approvals-through-LGTMs approach; that gives the evidence in the place where the request is made. So that should work. Next update: we are starting to work on container image scanning, similar to how we did for Go builds. Nia, whom we met last time, is working on it.

F
We have a work-in-progress PR, and we're discussing with SIG Release how they want to tackle it. There are some results that are not clean, like we would have for Go builds, so I'm working with them to, ideally, fix those before we can merge it. And then, after that, once it's merged we'll have two places where we'll be able to find any known vulnerabilities. So any reviews on that PR are welcome, especially from anyone who is familiar with bash, because there is a lot of bash involved.

F
Both Nia and I are new to — or not very expert in — bash, so anything you can share would be great. But otherwise, that's it from tooling.
E
Just a quick clarification on that one: this current work-in-progress one — is that the Golang database, scanning against the Go vulnerability database for vulnerabilities, right? Both?
F
All right, okay — good thing I was wearing pants. So the main thing is the issue, or the PR, that's open: the Go build scanning that we did last time was with Snyk, and this one, the container image scanning, is also with Snyk. The one that we had a demo of from the Google Go team — that one is actually something they wanted feedback on, and I think the tool is still in more of an alpha state in terms of maturity. So they have gotten our feedback.

F
Based on our experience, I think, as they mature more, we'll probably be in a state to see whether we want to add that Go database tooling — which I think they call the Go vuln scanner, or something like that — on top of what we have.
A
I'll just say I think this is great, and I'm super grateful for everybody who's working on it, especially because this sort of thing — this sort of vulnerability scanning — I feel like it's super valuable, but it also has a lot of built-in challenges, because fundamentally it's impossible to do it perfectly. And so I really appreciate the fact that everybody has been so eager to try things, to discuss what the results look like, what the reporting should look like — all of these things. Because, first off, I think it's good for us just as a project to be able to handle these things in a way that is sustainable and risk-based and meaningful. But also, I think it's important as an example, to show that there's a whole range of these sorts of things, from "yeah, won't fix" to "actually, this is a real big deal — thank goodness the scanner found it for us."

A
Anybody else have feelings about vulnerability scanning, vulnerability scan reporting? Any questions about this, or anything?
A
All right. Can you also share with us about the self-assessment?
F
Yes — not a huge update, but Robert, me, Ray, and some others met with the maintainers of Cluster API.

F
So, tentatively, what we've decided is that early January or so we'll wrap up the assessment. And for that, in the next two or three weeks we are going to do a lot of homework and reading on our side, because the Cluster API folks know their project much better than us. And then, the first week of October, just before KubeCon, we'll meet and get a deep-dive session from Ankita, who is one of the maintainers and contributors to Cluster API.

F
So she'll share all the data flows and all the things that help us model threats for Cluster API. And, if needed, we'll consider some more sessions in a short span of time, get all our ducks in a row, and then start actually adding the content and then start really reviewing it. Maybe potentially a PR by that time, when the repos are created. So yeah, that's it. Robert, if you're there and have anything else, please feel free to add.
H
No, I think you covered — certainly from my recall — the main points.
A
That sounds great, yeah. Keep us all in touch with how that's going. And also, you know, if you get to places where either you wish you had more specific expertise, or the inverse of that — places where you wish you had more input from somebody who is new — or any of those sorts of things where there's an opportunity that somebody could throw in.
A
Thank you. Going back to something that was mentioned in passing there: we have this issue open to create a repo for putting our miscellaneous deliverables and procedures — essentially, all of the non-versioned things that we have that are not specifically related to SIG lifecycle. That issue has been open for a little while; it looks like the folks from ContribEx have had a look at it, and there was some good discussion in there about: actually, where do we want this to be? So I have some thoughts here, which I have put in the doc. It looks like the other similar repos are in kubernetes rather than kubernetes-sigs, based on our original discussions.

A
One thing that I noted about that immediately is that kubernetes/security already exists, and so I asked the rest of the SRC what they think about doing something to disambiguate that. But this is all just what I think, and I hope it's helpful — this is why I do things — so, for all of the rest of us, I wanted to make sure that we brought it up here. So: other thoughts on this? Like, what color do we want to paint this bike shed?
F
Another perspective: Prow jobs today allow transfer of issues and PRs between repos, but only within the same org. So most of our current issues and PRs are in kubernetes/community, and if we had to create the repo in kubernetes-sigs, it's going to be really difficult to do that. So, in that case, having it in the kubernetes org would massively help — be—
A
—an enabler there. I like that a lot. And, like we've been talking about, one of the advantages of having a repo is we could use it to build out a shared project board.

A
Yeah, so yeah — I've asked the rest of the SRC about that, because it seems like what we're using kubernetes/security for is essentially the same thing that we're talking about kubernetes/sig-security being for, but for the SRC — which means that I don't imagine that there are heaps of outbound links into the SRC process documentation.
A
Okay, we're going to call that one good. Then the next thing we have here is from Robert, who wants to bring in a presentation. Tell us a little bit more about this, please.

A
I moved it into the discussion area, but you put it on the doc, and I appreciate that.
H
Fantastic. So, Matthias — and I don't know if he's on, but Matthias Left from Salesforce — had generously shared their threat model report, and now it's all public, so I can speak publicly about it. And he has volunteered to present to this group. I assume there's interest, but I wanted to get consensus from the group. He was going to discuss their work with the threat model — their reasoning, the findings, and thoughts about multi-tenancy in the context of Salesforce, but with Kubernetes.

H
Any other opinions, positive or negative?
A
I'd say plus one to that, in a purely hypothetical way.

A
The first concern that I would have with something like this would be about it getting vendor-y, advertiser-y, that sort of thing, in a way that's not really community-supportive. But in this particular case I am not in any way concerned about that, because we've already read the report; they've already demonstrated that they have a really great concern for sharing it with the community and being supportive of this. And so I'm excited about this — I think it'd be really cool.
H
He said — I think I put the date tonight — so he wasn't available today, and obviously it was kind of too early to get consensus. I think the next date would be the 23rd — is that correct? I believe so, yeah. So I think he said that he was available, so I'll give him the good news and firm up that yes, he's still available and we're ready, willing, and happy to receive.
A
That's wonderful, yeah. Then, after that, we can talk about what to do with it — like, depending on how he feels, we might even consider doing something like trimming out that section of the recording and putting it in another place that is easier for folks to get to, or something. We'll see how it goes.

A
Well, this is great. Thank you so much. One other thing that I wanted to bring up:
A
It's a big PR. It's been open for a long time, and there's a very long discussion thread on there. And so I was thinking that it would help the process to move along if we would merge this KEP marked provisional, and then sort of re-begin the discussion about what it would take to mark it as implementable.

A
This is, again, something that I am happy to share my thoughts on, but my thoughts are only mine — so, here's to the rest of the group: what do we think about that possibility?
F
Maybe everyone else is aware, so I'm going to ask a newbie question.

A
Yes, please!

F
What is the difference between a provisional merge and an implementable merge for a KEP — unless I'm misremembering the terms used?
A
I mean, based on the name of things, I like the idea of provisional, because it is explicitly saying that it's not ready to go. Tim, I'm guessing that you turned on your camera in order to provide some historical context here?
J
I don't know about historical context, but I have some thoughts on the way that I've seen provisional used before. My biggest concern with merging something as provisional is you tend to lose all of the conversation history and context around it, because you don't have all the comment threads open on the PR. And I've seen this strategy used before, where something gets — you know, there's kind of a lot of back and forth on the PR, and we say, "Well, let's merge what we have and hash out these things," and so the PR gets merged as provisional and then gets sort of dropped for a little while, and everyone loses context on what those unresolved things are. And then, at some point, it just sort of gets kind of pushed forward again without that original context.
J
So I guess what I would like to see before this gets merged as provisional is to make sure that anything that's unresolved is clearly marked as unresolved in the KEP — we have those unresolved tags for it — so kind of block that around anything that has concerns, and then also kind of list out in that unresolved section: these are the considerations, these are the alternatives proposed, these are the concerns — so that we make sure that we're kind of carrying that context forward.
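For reference, the unresolved tags mentioned here are a convention in the kubernetes/enhancements KEP template; a tagged block looks roughly like this (the context note and bullet contents are hypothetical):

```
<<[UNRESOLVED API shape for the new field ]>>
Considerations, alternatives proposed, and concerns carried forward:
- Alternative: ...
- Concern: ...
<<[/UNRESOLVED]>>
```

Keeping the open questions inside such blocks in the merged document is what preserves the PR-thread context Tim is describing.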
J
And then we used this strategy when we were working on Pod Security Admission, and one of the things we did there as well was, in the graduation criteria section, we documented for alpha: before this goes into the alpha, implementable state, these are the sections that need to be resolved — and, you know, maybe there are additional ones that can be punted to beta, or something like that.

J
Yeah, I think the latter. As long as we're clear about what the unresolved sections are, and pulling that context forward, I think it can be a useful tool to merge that and kind of focus the conversation on an individual decision there.
A
But thoughtfully, yeah. Anybody else have feelings — from having gone through long-standing KEP PR processes that had a lot of discussion in them — either for or against provisional merging?

A
Well, fair enough. I appreciate that input, Tim, and I suppose we'll take this to GitHub, then.
A
So those are all the things that we had written down, but it isn't necessarily all the things. So, as we always do: this is our space, this is our time. Does anybody have anything that they would like to share — anything that anybody has been thinking about? Half-baked ideas, fully baked ideas, things that kept you up at night recently?
E
Well, with my release hat on — I'm doing it for 1.23 — enhancements freeze is later this evening at 11:59 p.m. In terms of the kube user namespaces support KEP: for a KEP to be part of a release, it has to be implementable, and it also has to have the KEP merged. And there are other factors as well: it has to have graduation criteria, go through production readiness review, and a few other things. But it looks like this KEP, in its current state, is not ready for a release yet — this KEP's PR has been open since 2020.
K
Sure, and that's great. So I guess, in order to move it forward, we kind of need to find people who are willing to work on it, then. Yeah, okay, thank you. And that is the right KEP number — let's see, KEP 127, is that what you said?
A
A question there about user namespaces in container runtimes — like, let's take this down one level. I have always been filled with desire to use user namespaces in container runtimes, but things like dealing with file ownership on the unpacked container images have been a problem over and over, with a bunch of different solutions.
K
So I tell you what: if it's of interest to this group, I will, you know, kind of pull some people and see if there are additional Red Hatters who'd like to — or other folks in the community. If this group is interested, I can pull in some people with deeper expertise than I have, and get it on the agenda at some point.

K
I'll get some folks lined up, and if anybody else has expertise, or knows experts in the area, by all means pile on — but I'm happy to get some of that lined up. And just some background, you know, kind of the—
K
—the place where it's coming up in our customer base in particular is telco, because so many telco workloads today run as root, and it's a lot of work for them to make the changes they should be making to be more container-native. And so user namespaces gives — would give — a level of protection that they'd be looking for. So — well, go ahead.
L
No — I was gonna say one other thing on the rootless thing: there's the KEP that did get merged, 1371, which was for alpha support for running in rootless mode, which kind of—
K
I'll take another look at it. I had assumed it was primarily the control plane; you're right about the runtime. The challenge is, right, these are user-space services that are deployed to the cluster, and they want to change their UID after they're up and running. Again, it's right — it's an old-style architecture that they just haven't necessarily updated.

K
I'm not sure that, you know, we're trying to convince them to do it differently. But I'll take another look at that one. Thank you.
A
I would love to ask a clarifying question about that. Because — I guess this is a thing — as a sort of old-school Unix person, who still thinks of Unix as being a shared system that is designed for having multiple, mutually distrusting, unprivileged users, user namespaces seem good, and also like doing the thing that Unix was originally designed for again, but in a different way. And so, since you have real-world folks who want this—
K
Is that the case? Well, the second part is what we're actively investigating with them, right? It is my assertion that they could re-architect what they're doing in such a way that they no longer need to run as root.

K
We're testing that with these application teams kind of at the moment, to see whether this is really just a transitory phase — which, unfortunately, in telco, can be a couple of years of transition, right — or whether there is something more that means that the application absolutely must have this. And so we're kind of actively investigating that, and the question around user namespaces — the namespace was seen as a mitigation for the current state of their applications.
A
...is still more privileged than a non-root user — fair. Yeah, yeah, agreed. And so, yeah, when I think through user namespaces, the sorts of things that I'm thinking about are: either you are running as root because you didn't build the container image, or because there are, you know, legacy reasons why you can't solve file permission problems inside your container — but you don't actually want to be root on the host. So, therefore, stick it in a user namespace.

A
We can ignore all our file permissions inside the container, while also not being root on the host. And I think that's kind of what you're getting at with the transitory-phase thing: where, in principle, you could change your container image to not need to be root and just get all the file permissions right, but in practice that could be really hard.
K
Or it could take a long time — and, you know, they've containerized the workloads already, etc. Yeah. And I'd be interested: if this group thinks that user namespaces are not a useful thing for Kubernetes, it would be super helpful to understand more — and that can be offline, if it's not a group conversation — to understand more, kind of, why this group thinks that. Because, while I have this concrete use case from the telco environment right now, I have customers — my customers — who understand it, and it would help my team, too, to think through: what are the other mitigations that would provide the things that our customers think are being provided by user namespaces?
J
Yeah — feel free to reach out to me on Slack; I know a handful of people in the community who have looked into this. I think that, aside from the — if you're able to re-architect your application so that it runs as a non-root user, I think the benefits of user namespaces are maybe a little more marginal. But I think there's a ton of examples of legacy workloads, third-party workloads that you don't control — like, a lot of different reasons that you might not be able to do that — and I think that's where user namespaces are valuable. I also think that the benefit of remapping non-root users to, like, segmented spaces is maybe a little marginal.

J
And just kind of another option that we have today in Kubernetes is running in a micro-VM, where you can run as root.
K
Yeah — and that is something that we've also been talking about. Now, there's a nested virtualization challenge there, if you're not running on bare metal.
A
Yeah — who has dreams about user namespaces? Like, kind of jumping off of what Tim was saying: I think that user namespaces are socially interesting as a security control, because I think we're all used to the idea that all security controls are applicable all the time. And so they feel like a really useful tool in certain situations, but they are solving similar challenges to other things — like re-architecting your application — just in a different way. And I wonder if that's part of where this kind of disconnect comes from: user namespaces exist, but they aren't necessarily an "everybody should do this all the time, because it's always good" sort of thing — as opposed to, like, non-executable stack. Which — actually, I guess non-executable stack had a lot of uptake problems when it first came out too, because there was code written that ran code off the stack on purpose.
I
I think the thing I've seen with user namespaces, in terms of pushback, is people citing the complexity. So whenever I've seen, like, discussions online, you'll see the pushback. So I think almost it's like — if it was clearer to people where the risks lay with user namespaces, which I don't think is a super clear thing. Like, I've heard some people say: oh, user namespaces — they just love complexity; that increases the attack surface, so you're losing as you're gaining. Yeah, you're gaining in some ways, but you're losing in increased attack surface. But I don't think there's — or I've not found — a really clear articulation of what you are losing. Like, when we say "attack surface," what do we mean? Are there some concrete things that we can point out? So any information there would be super interesting to see, because it's not something I've found yet.
K
No, no — I was just gonna plus-one that, and just say, you know, I've had success over time — to your point, Tabitha — about, you know, people assuming that if there's a security policy, it should be applied. Like, there was, you know, for a long time: never use a self-signed cert, right, and always use corporate CAs. And then the whole process around generating a cert for your applications from the corporate CA takes so much time, and they weren't designed for automation. And so, you know, kind of now—

K
—it's sort of like there were a set of assumptions made when they were implemented for the runtime that I'm hearing questioned here — just like in Docker, when Docker implemented them. And so I'd plus-one Rory: I'd love to understand more about, you know, what's the reality.
A
...at the list of local privilege escalation CVEs that have predominantly affected distros that ship with the setting that allows unprivileged users to create user namespaces — thereby exposing different parts of the kernel API, which would normally only be exposed to root, to everybody. By letting everybody put on, like, that rubber horse-face mask and be like: "Yeah, I'm really root, let's go — I want to make a packet socket," or whatever. So, like—
A
—was: you needed to run untrusting things as different unprivileged users. And without user namespaces, even if the container images that you're running aren't assuming that they're running as root, generally there's an assumption that they're running as some particular user ID, baked into the process of building that container image. Because — think about when you're setting up a standalone server: if you want to run a particular daemon as a particular unprivileged user, then, like, temp directories for that daemon have to be owned by that user and not writable by anybody else, and secret files for that daemon need to be owned by that user and not readable by anybody else — and all of those sorts of things. And so, from a not-exposing-kernel-surface-that-you-don't-need-to-expose standpoint, like Tim was saying, remapping root is kind of the primary advantage of user namespaces.
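The per-daemon ownership convention described here can be sketched in a few lines — a minimal, hypothetical illustration (no real daemon involved; the directory name is made up):

```python
import os
import stat
import tempfile

# Per-daemon state, as described above: the directory is owned by the
# daemon's unprivileged user and closed to everyone else.
spool = tempfile.mkdtemp(prefix="daemon-spool-")
os.chmod(spool, 0o700)  # owner may read/write/traverse; nobody else may

info = os.stat(spool)
print(oct(stat.S_IMODE(info.st_mode)))  # 0o700
print(info.st_uid == os.getuid())       # True: owned by the creating user
```

Those per-UID assumptions are exactly what a container image bakes in at build time, which is why the image's expected UID matters at runtime.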
A
If that were the case, we wouldn't be so worried about not running things as root. That's probably true, but we would still have the issue that if you were running N mutually distrustful workloads on a node, and a workload broke out of the container onto the node — that container breakout, if we assume that it preserves user ID — now you're user ID 1000 on the node, which is unprivileged. But since all the other containers are also running — asterisk — all the other containers are also running as 1000, that unprivileged user 1000 still has a fair amount of privileges, because it can still rifle through all of the other default containers.

A
So, even if we weren't running as root — if user namespaces were totally easy and totally transparent, and the container runtime could make an ad hoc remapping at container startup time for every single container — then it would be like a defense-in-depth measure where, if I break out of a certain container that's running as user ID 1000, I cannot immediately rifle through all the files of every other container that's running as UID 1000. Because I only think I'm running as user ID 1000 — every single one of us is actually running as a different, randomly selected, unprivileged user ID. And that, in a weird way, gets us back to using Unix as a multi-user system.
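The per-container remapping just described boils down to simple arithmetic — a minimal sketch, where the base offset and range size are assumptions (a real runtime expresses such ranges via /proc/&lt;pid&gt;/uid_map, and the allocation might be random rather than sequential as here):

```python
SUBUID_RANGE = 65536   # conventional size of one /etc/subuid block
BASE = 100000          # assumed first host UID handed out to containers

def allocate_mapping(container_index: int):
    """Give each container its own disjoint range of host UIDs."""
    return BASE + container_index * SUBUID_RANGE, SUBUID_RANGE

def host_uid(container_index: int, in_container_uid: int) -> int:
    """Translate a UID as seen inside the container to the host UID."""
    start, length = allocate_mapping(container_index)
    if not 0 <= in_container_uid < length:
        raise ValueError("UID outside the container's mapped range")
    return start + in_container_uid

# Two workloads that each believe they run as UID 1000...
a = host_uid(0, 1000)
b = host_uid(1, 1000)
print(a, b)    # 101000 166536 -- distinct, unprivileged host UIDs
print(a != b)  # True: a breakout as "1000" can't rifle the other's files
```

Because the two ranges never overlap, a process that escapes container 0 still holds host UID 101000, which owns none of container 1's files — the defense-in-depth property described above.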
D
I missed the first minutes, so that's why I didn't introduce myself — forgive me that. As you can see from my background, I'm a Red Hatter, and a colleague from my team suggested that I join this group. So, yep, I'm here.
A
And if anybody wants to start a discussion about user namespaces, Slack is open 24/7 for that.