From YouTube: Kubernetes SIG Security 20220505
A: All right, hello everybody, it is five after. We are going to officially declare the start of another Kubernetes SIG Security meeting. It is wonderful to see everybody here, and we have a few things on the list. So first, we'll do little introductions: I'm Tabitha Sable, I'm one of the co-chairs, and I'm still just delighted that I get to help hold this space so that we can attack things and make Kubernetes safer.

C: Hi, I'm Pushkar Joglekar. I still have to get used to pronouncing my last name correctly; I pronounce it the way the English would most times, which is the wrong way. But anyway, I am lead on the SIG Security Tooling subproject, and I try to make my and other people's dreams about making Kubernetes more secure come true.
D: All right, I'm Rory. I'm grimacing because my cat is trying to eat my hand, which is what's happening. She's currently trying to eat my hand, which she thinks is a great idea; I'm perhaps not quite surprised.

F: I'm Tommy. I'm kind of ramping up on some other efforts around here, and it's nice to see everyone again.

H: Then let me go next. Hi, I'm Benjamin, I'm an IT security student from Germany and also, let's say, a kind of Kubernetes security consultant for some companies.

C: Hello, I'm Mohit, I'm a security consultant from [inaudible].
A: All right, as we do, let's start with hearing about what has been going on in the subprojects. Ray is not able to make it today, but I will share his update from the Audit subproject: they had a pre-kickoff meeting with NCC and agreed that the pre-kickoff meeting is the kickoff meeting, and so now some of the logistical parts are coming together to be able to make the audit happen.

A: I mean, sometimes things make it hard. I will report on behalf of Savitha from Docs as well. At the last meeting they were talking about changing their meeting times, because their meeting times end up being on the same day as, and right after, the main SIG meeting, and that is hard if you're going to be going to both of them. So soon, expect to see a Doodle poll link in the Slack channel. And Docs is working together with SIG Docs on PSP deprecation, tracking documentation updates to make sure that nobody is being told within the documentation any longer to go and use features that no longer exist. So I think that that is a...
C: Tooling, yes. So, no new updates, really. There are a couple of outstanding comments on KEP-3203, so I'm going to wait for a week after making the updates, and then, with lazy consensus, we'll just resolve the threads. I think you've already approved, Tabby, so the only approval that's missing is maybe from Elena for the production readiness review, but she's already reviewed it, and she mentioned it's really not relevant for PRR. So...
C: So that's what is going on in Tooling. I'll continue on to the self-assessments, and you can ask questions together if you want, if anyone has any for self-assessment. I really went deep again into looking at what's really remaining for the Cluster API self-assessment. It looks like we have about 62 recommended mitigations, and I think there are about 22 threats or so.

C: Now we have tracking issues, which was the main idea: that we should know that we are working on things we discovered from the threat model and the assessment. Out of those, 20 are missing tracking issues now. So my next step is going to be creating the relevant issues for that in the cluster-api repo, working with the SIG Cluster Lifecycle folks, and then we'll link them back to the assessment markdown, after which we'll mark the PR for the assessment ready for review, and then you all can jump in, take a look, and share more feedback.
A: Is the creation of these tracking issues something that would be a good opportunity for someone who is looking for a way to help out, something with a relatively gentle on-ramp? Or does it need a pretty detailed understanding of the context?
C: Actually, it might not be a bad idea. Most of the stuff that will be part of the issue description can be copied directly from the markdown, because that's what we want to fix and it's already defined in the markdown of the self-assessment. So yeah, I can create one issue, or point to an existing one we have created, and if you follow the same kind of format and pattern, we can get all those issues into the repo, and then, yeah, that would be a nice contribution for everyone.

C: So anyone listening now, or in the recording: reach out on the #sig-security channel, tag me (PJ), and I can help out.
A: All right then, let's talk about pause.c.
H: Yeah, let me give you a bit of background information. I was involved in a bigger project where we were building a larger Kubernetes cluster across several cloud zones, and we used GKE for it. GKE has a concept where they use pause containers in every pod: they spawn a container in every pod, which is called the pause container, and this pause container, for example, handles zombie processes. I had a short look at a vanilla Kubernetes installation and noticed that there are no such pause containers when I run vanilla or native Kubernetes. The thing I want to know, or maybe also discuss, is: isn't this kind of a security tool we could use? Because I think a valid attack would be: if I'm a company and I'm running my Kubernetes cluster, and I'm using, for example, some project which is open source and, let's say, also badly coded and produces zombie processes.
D: So I think, and then please everyone else correct me with your experiences: as far as I know, kubeadm definitely does use pause. I just checked one of my clusters and I'm seeing pause on there, but it may be that certain distros do and don't. I don't know which ones you looked at, but I've just got a little kubeadm cluster, so that's the one I can check.

D: There's just no way you can avoid it. All a pod really is, is a collection of containers that share the same Linux namespaces, and in Kubernetes the pause container, I mean, I've never seen it not there, but it's the one that holds down, essentially anchors, the Linux namespaces and the cgroups as well. So it has to. I mean, I'm surprised you don't have it, but you have to have a PID 1 somewhere in your pod.
H: So it's normally the standard in Kubernetes that you have a PID 1 in your container that has a handler for the... what signal is it?
K: One quick thing here: there is a PID namespace. pause is probably in a different PID namespace than the workload container itself, so there's very likely not an opportunity for the pause container to reap the processes in another PID namespace. It may be possible, but I'm not aware of a way for it to do so. So there's a little bit of a bookkeeping issue that we can run into there.
A: Yeah, that's my understanding of it as well. Since a pod can have multiple containers in it, then, depending on the settings that you have in the pod specification, those containers may or may not share their PID namespaces with each other, and they may or may not share certain other things with each other.
A: They always share a network namespace with each other, since the pod itself gets an IP. Because of the fact that a container is sort of this wobbly thing: there are several Linux features, you know, several different namespaces and several different cgroups, and if you have enough of them separate from the rest of the processes on the system, then you feel like a container. But that means that a container is not all that strictly defined. During the setting up of a pod, in order for the namespaces for that pod to be able to exist within the kernel, there has to be some process, and so my understanding is that every Kubernetes will always have a pause container running in every pod, which then may or may not be visible from inside the other containers.
A: In that pod, like, if you turn on shared PID namespace for a pod, I believe that when you look in ps, you can see the pause container and you can see the pause image. But if you don't have shared PID namespace on, then from within the containers in your pod you don't ever get to see it. You do, though, if you look at crictl ps or docker ps or whatever the low-level "hey, tell me what's going on in my container runtime" command is.
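The toggle being described is the pod's `shareProcessNamespace` field. A minimal, hypothetical pod spec that turns it on might look like the following (the pod and container names and the image are illustrative, not from the meeting); with this set, `ps` inside the container shows `/pause` as PID 1:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo         # hypothetical example name
spec:
  shareProcessNamespace: true   # all containers (and pause) share one PID namespace
  containers:
  - name: main
    image: busybox              # example image
    command: ["sleep", "3600"]
```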
E: Yes, you don't see them with crictl itself, indeed; you only see the pods running. But if you try with something like the CLI of Docker or containerd, you will see them differently.
F: I think it's hidden from the API server too. Since they're both using CRI, I would think that's why it's hidden: it's just so you don't see a bunch of pods' pause containers all the time. I think it's more of a user-experience thing, I would guess, but I don't know.
K: Yeah, and I added this into the chat as well, but the reason for the pause container is that the network has to exist before the workload container starts, and so the pause process acts as the anchor for the network namespace. And so that's the sole reason for the existence of the pause container: to act as an anchor.
K: The network namespaces are generally recursive, so you don't need forks unless you do an unshare or a clone, I think, are the two; maybe clone's not the right term, but unshare. In this particular case, what ends up happening is you spin up your pause container, then you run the CNI with that network namespace that's used there, so that network namespace is separate from your host.
K: So when pause is called, it'll call unshare with an explicit network namespace attached to it that is brand new to that specific process. And then what happens is, when we call the unshare system call for creating the other containers, we'll specify that it uses the same network namespace as the pod's container, but it becomes a separate PID namespace, and the other namespaces that are attached to it become separate from each other. And so whatever ends up being...
A: And because of the fact that pause containers are just always there, there's still some mystery to them anyway. So I would love to see a follow-up thread in Slack about this, because I know that I want to go and poke at a Kubernetes cluster and verify the things that I think I remember are true.
K: Maybe there's an interesting feature suggestion we could put into the kernel as well. Like, maybe there's a way we can say: please, whatever this thing is running is a container, so please have the thing that reaps it be in this other namespace instead. I don't know if it would be mechanically possible or not, but basically having a way to signal that, hey...
K: This thing here is not going to reap its own processes, so please pass it up. But it may not be mechanically possible, because what ends up happening is those signals end up going to whatever PID 1 is at the end of the day, and then that ends up not passing it up. So the kernel may not ever see the fact that those things are really zombies. So I'm not 100% sure.
A: I don't know. From the chat, the question was: there was an initiative to get rid of the pause container? I don't think I've ever heard of that, but there are many things in the world that I haven't heard of.
A: I know that there was recently a change to the Dockerfile for the most commonly used pause container, to run it as user 65535 instead of user 0, which seemed good. It doesn't do anything; it just sits there catching signals, but if it just sits there catching signals, there doesn't feel like any good reason for it to be running as root. So I was aware of that: deprivileging it a little bit.
H: Is the principle of the pause container documented anywhere, other than in the source code itself?
D: If you get that link I put... I put a link, and that's my go-to every time I forget exactly how pause works; I go read it again. It's Ian Lewis's; it's a long, deep dive on pause. I think there are some good links in there.
A: It depends on your distro. Ubuntu 22.04 has a systemd version in it that wants to do cgroups v2 by default, but a bunch of earlier versions don't. That was how it came up in my day-job context: we were talking about what is needed to be able to roll node images onto Ubuntu 22.04, and one of the biggest changes that we were worried about there was cgroups v2.
A: I mean, for me personally, with my day-job hat on, just as a person who uses Kubernetes, my biggest interest there is continuing to take node OS version upgrades without starting to put in configurations to use the old thing, with the goal of not having to change what I'm doing. That's my immediate interest in cgroups.
A: Essentially, rootless containers as a runtime behind Kubernetes is a whole other kettle of fish that some folks work on from time to time, but I don't think it's really steaming forward. Does anybody else have more up-to-date info on that than I do?
I: I just wanted to say one thing about cgroups v1 versus v2. I think those are just two different ways of talking to the kernel modules. It's a cleanup, sort of like an interface cleanup, but it's not a fundamental difference under the hood between the two, beyond how you organize the data in the pseudo-directories when you're talking to the Linux modules. So that's it.
K: Let's say that you're a user and you have no capabilities, like, Linux capabilities. When you call unshare and create a new container or a new namespace environment, whatever is running in that newly unshared environment may have additional capabilities, including CAP_SYS_ADMIN. Now, all of these are supposed to be namespaced, but the reality of it is that a large portion of the Linux security model initially assumed that there was no way to regain those capabilities.
K: So there's still a lot of work going on in order to harden those paths, and there are certain bugs that come around where, historically, you would not have been able to execute them, but the fact that you can gain that capability ends up opening the path to perform the attack. One of the more interesting recent examples of where that happened is the one I posted into the chat: CVE-2022-0185.

K: Historically, it would have been impossible to run this kind of an attack, but the moment you do the unshare with your network namespace and CLONE_NEWUSER, you gain some 30 capabilities, including CAP_SYS_ADMIN, CAP_NET_RAW, CAP_NET_ADMIN and so on, which then opened up an attack that allowed the CVE to become exploitable even if you had zero privileges to begin with. So there's some interesting interaction there.
K: The concept of rootless, I think, is good in the long run, but there's a gap between now and then, because it's a change in the assumption: there's a gap while we still have the previous assumption, and it's going to take time to harden us into a place where rootless really works out. And that was the only reason I was asking about cgroups v2: to say please be careful, if that was the reason for bringing it in. But parity is good.
K: But I would be very careful with rootless, and if you're using cgroups v2, to also make sure... I would recommend shutting off the ability for a container to spin up a subcontainer, as a best practice, if possible.
A: Yeah, there's a lot of things to consider with the "unprivileged users can create user namespaces" feature. That feature is absolutely the bedrock of being able to run your container runtime as an unprivileged user, which then goes on to have a lot of other positive side effects. But also, if you aren't using it on purpose, then it is easy to forget that it is opening up a lot of possibilities that might not otherwise have existed.
A: I think it is really good to be thoughtful about what your workloads are and whether you actually expect unprivileged users to call unshare and create new user namespaces. If you do, then, you know, apply good supports for that: you can have things like appropriate seccomp profiles for those workloads. And if your workloads are not expected to do that, then in Debian-like kernels there is a sysctl for turning off the ability for unprivileged users to spawn user namespaces.
A: I believe that it was the opinion of the upstream developers that there was no need to turn off the ability for unprivileged users to create user namespaces, and the Android and Debian kernel developers thought that there was. And so, it doesn't matter for Kubernetes, but I believe that Android has the sysctl for doing that, and that it's the same one that Debian and Ubuntu and those kinds of distros have. Does anybody know whether Red Hat kernels have that sysctl or not?
A: I think so, though honestly this sounds like it would be a really good blog post, or maybe a good KubeCon North America in Detroit proposal, for somebody.
A: My understanding of it is that in RHEL 6 and RHEL 7, unprivileged user namespaces were not allowed: that kernel feature was not on in their kernel config. In RHEL 8 and above it is in their kernel config, and they did not take the out-of-tree patch for the creation of the sysctl that can turn it back off. That is what I think I remember, but anyone who needs to count on that should check for themselves.
A: I believe that the containerd and Docker default seccomp profiles block unshare, and I was once prevented from running a kernel exploit on stage at KubeCon by that seccomp profile, and that was fun. So, also, yeah: blocking unshare there, if you're not using it. It's good to turn things off. That's true, yeah. Further thoughts there?
A: I'm going to copy this link to the rootless container thing here into... I'm involved with an organization that doesn't pay for Slack, and it's not... and then...
D: They are still there in Slack. If you ever need something, you can upgrade temporarily and get the thing you want; Slack still has them. I've done it once, and then it wipes them again afterward. So it's not deleting them, it's just not telling you you can have them, at least last time I...
A: Yeah, you're going to delete them. All right, I see we have another thing on here, which is a suggestion: do we wish to cancel during KubeCon?
B: ...stream it, so that the people who can't make it in person can still come to it, because I think it would be nice to be able to be more inclusive of the folks who aren't able to make it in person, if we're going to do that.
A: Do you want to bring that up in Slack, then? I think, generally, for KubeCon, rather than just having the meeting anyway or just canceling the meeting, we put it up to a temperature check: do folks want to have a meeting? It is KubeCon, and a lot of us will be distracted by KubeCon things; however, not everyone will be. Is there community value in having a meeting?
A: All right, let's do that. Here we are: we have reached the end of what we have planned for ourselves. This is our space, so does anyone have anything that they would like to bring up to the attention of the group?
K: There's one last thing; I'll look up the link for it. There is a new special publication put out by NIST that some of you may be interested in seeing. It's probably not hugely important for this group directly, but it will be useful for others. It's about cybersecurity supply chain risk management practices, an area that I know multiple people here are interested in, and I think this was published yesterday or today.
I: Thank you. Do you have the document number? Oh no, there it is; somebody already put it in. So thank you.
A: All right, I believe that we have come to the end, then, of what we want to cover together. Thank you all so much for coming. It is a delight to see everyone, and I look forward to seeing folks again at another Kubernetes SIG Security meeting. Until then, Slack is open 24/7 and we can work on things in there.