From YouTube: Kubernetes SIG Security Tooling 20210720
A: All right, cool. I'm seeing thumbs up, so let's maybe go through some usual agenda topics and... oh, let's see... oh, yay, Stephen just responded. He is joining in a few minutes; he is having some internet issues. That makes sense. All right, cool.

A: So, okay, let's give him some time. I'll go through the updates from the SIG Security tooling group quickly. We had a PR open, and the feedback I received from our chairs was: let's see if we can make the triage process private. So what we did was basically create a Google group under kubernetes.io, and that will now get notified any time a Snyk scan detects a vulnerability. And the good thing is, because we don't want a searchable vulnerability store, we are basically going to be failing the job when there is a non-zero number of vulnerabilities detected, based on the filtering that we have applied. Once the job fails, we get an alert, we run the scan locally on our side, see what's going on, and then the triage will happen privately. After that, once the resolution is clear, we will go and start doing things like creating an issue saying this is a false positive, or creating a PR fixing it, and things like that.
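The fail-the-job gate described here can be sketched as a small shell helper. This is a hypothetical sketch, not the actual job configuration; the snyk/jq pipeline mentioned in the comment is an assumption about how the count would be produced, and the function name is invented.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "fail on any finding" gate described above.
# fail_on_vulns takes the post-filtering finding count and fails the job
# (non-zero exit) when it is greater than zero, without printing any
# vulnerability details, so the triage itself can stay private.
set -euo pipefail

fail_on_vulns() {
  local count="$1"
  if [ "$count" -gt 0 ]; then
    echo "scan failed: ${count} finding(s) after filtering; triage privately" >&2
    return 1
  fi
  echo "scan clean: no findings after filtering"
}

# In the real job, the count would come from the scanner, for example:
#   snyk test --json | jq '.vulnerabilities | length'
# (the Snyk CLI itself already exits non-zero when issues are found).
```

Failing the job instead of publishing a report is what keeps the scan results out of any searchable, public store.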
A: No? Okay, all right. So, for folks who have joined and don't have access to the meeting minutes doc...
B: Yeah, I'm here. I don't know how much of an update I have, other than we did put together the outline of the self-assessment.
A: Yes, I saw that. I think that was a very, very good starting working document, and a couple of updates I have from my side on that: I ended up, like I said that day, creating a template out of your doc, and the doc was so well written that I didn't have to do much, so kudos to you. I have the template linked in our Slack channel, and the working doc for Cluster API is also linked to the template.
A: We got one... two people from VMware who will work on, and focus on, the assessment for CAPV, which is the Cluster API provider for VMware, so that will be great, and I'm hoping other cloud providers can also jump in and share their resources for assessments. With that, I see Stephen has joined. Hey, Stephen, can you hear us?
A: Okay, no, no worries; we're just glad to have you here. I don't know if you have screen-share access or you're on your phone. If not, I can share my screen and then we can go through the questions; whatever you prefer.
C: It's good for me.
A: Okay, cool, all right. So, thank you, Stephen. I'll just give... I mean, you don't need an introduction, but I'll just give some context behind what this meeting is about. We probably still have about half an hour to go, so we should be able to manage this fairly well and cover most of the agenda.

A: We did some work with Snyk scanning, thanks to many folks on the call, and now we want to explore how we can apply the same process, and learn from what we did with build-time dependencies, to the images in our kubernetes/kubernetes repo. Stephen is, according to everyone I talked to in SIG Release, the best person to share insights on this, and I was very grateful that he accepted to come over and share his thoughts.
A: Cool, okay. So, Stephen, anything you would add in terms of context, or should we jump right in?

C: No, we can jump right in, I guess.

A: Okay, cool, all right. So, for me, when I was new to the kubernetes/kubernetes repo, I knew a bit about the project, but the repo itself, where the code lies, and how many other dependencies on GitHub we have... I had no idea. And my main assumption at that time was: maybe Kubernetes doesn't have any images, because isn't it written in Go, so you really only need binaries? And then, when I started diving deep, I realized no, that's not true, actually. So, for anyone who is new or just starting like I was, what's the best way for them to know the current state of the art of the container images in k/k? How does it work, and why do we need so many images?
C: Sure, yeah. So, for kubernetes/kubernetes: actually, if you looked back to, let's say, 1.17, maybe a little further back, SIG Release recently moved all of the container images, or moved a lot of the image-building process, from the kubernetes/kubernetes repo into the kubernetes/release repo. So, if you ever want to see a lot of what's going on... I think I'm able to share my screen now, if you want me to do that, yeah.
C: If you hit "all attendees are allowed to share"... there's a... yeah.
C: So, a great place to start in the kubernetes/kubernetes repo is the build folder, and one of our most popular images is actually kube-cross. kube-cross used to be built here, under build/build-image/cross, and then there was a build configuration here, but right now it only contains the version; it lets you know which kube-cross version you're using. What that image is for is building Kubernetes, but building Kubernetes across multiple platforms.
C: So it contains all of the various bits that we need to actually build Kubernetes on multiple platforms. But the more interesting file here today is the dependencies.yaml file. So, if you've ever worked on updating dependencies in kubernetes/kubernetes, you should have touched this file; in an ideal world, you should have touched this file.
C: What this file contains is essentially a manifest of dependencies that are tracked by a tool called zeitgeist; it essentially runs under the verify-external-dependencies script within kubernetes/kubernetes. That script basically slurps this YAML file and asks: does the version that you have set in some reference path match the version that we're expecting in this file? So we're tracking the zeitgeist version, the version of the tool itself, but we're also tracking versions of various images: the agnhost image and its dependents, the CNI plugins version, CoreDNS, so on and so forth. Towards the bottom, I listed out a few things; Go, for example, is an interesting one. So we view the kube-cross version here.
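As a concrete illustration of the check being described, here is a tiny self-contained version of the idea: a manifest pins a version, each refPath names a file with a match hint, and the check fails if the referenced file has drifted from the pin. The dependency name, file names, and registry below are invented for illustration; the real manifest is build/dependencies.yaml in kubernetes/kubernetes, and the real checking is done by zeitgeist.

```shell
#!/usr/bin/env bash
# Illustrative version of the verify-external-dependencies idea:
# the manifest pins "coredns v1.8.0, referenced in cluster.yaml",
# and we fail if the referenced file drifted from the pinned version.
set -euo pipefail
workdir="$(mktemp -d)"

# A made-up manifest entry in the dependencies.yaml style.
cat > "${workdir}/dependencies.yaml" <<'EOF'
dependencies:
  - name: coredns
    version: v1.8.0
    refPaths:
      - path: cluster.yaml
        match: coredns
EOF

# The referenced file, which must agree with the pin above.
cat > "${workdir}/cluster.yaml" <<'EOF'
image: registry.example.com/coredns:v1.8.0
EOF

pinned="$(awk '/version:/ {print $2; exit}' "${workdir}/dependencies.yaml")"
if grep -q "coredns:${pinned}" "${workdir}/cluster.yaml"; then
  echo "dependencies in sync (coredns ${pinned})"
else
  echo "drift detected: cluster.yaml does not pin coredns ${pinned}" >&2
  exit 1
fi
```

The real tool generalizes this: every entry in the manifest is checked against every one of its refPaths, so a version bump that misses one file fails the pre-submit.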
C: But if I was to change one of the versions, if I was to change this to 1.17.2 or something, and it did not match these, the pre-submits for kubernetes/kubernetes would fail; specifically, it would be the pull-kubernetes-dependencies pre-submit. So we lock in all these versions here. You can also see that we're tracking the Go version, and we're tracking the upstream version, which is essentially there because Go upstream does not quite follow SemVer notation, or semantic versioning in general. It does, ish, but for Go, their major versions are actually what we would consider minor versions, and when they're dealing with pre-releases, you may see something like go1.16rc1 instead of go1.16.0-rc.1.
C: So we have additional checks here just to say: just check the major version, just check the major version and go about your day. We're also tracking the base images, so k8s.gcr.io/debian-base and each of its dependents. The debian-base images are basically used in a few of the final images right now.
It's
only
so
it's
basically
debian
base
to
debian
ip
tables
and
then
the
debian
ip
tables
image
base
images
used
to
build
the
the
cube
proxy
image
right.
Every
other
image
currently
is
using
something
called
go
runner
go.
Runner
is
a
smaller
wrapper
image
based
on
based
on
digitalis
that
is
used
for
the
api
server,
the
controller
manager,
the
scheduler
and
really
anything
that
will
support
it.
C: There was an effort, a few cycles ago, to run primarily on distroless images; that work has mostly been complete. But where is the exciting stuff?
C: Yeah, so those are a lot more likely to get bumped, and I think part of it is because distroless in general is pretty good at breaking scanners. So, mostly, what's tricky about it today is that there are still a few places where we use the debian-base image; etcd would be one of the examples. And we're primarily using that debian-base image to build the debian-iptables image.
C: The debian-iptables image is the one that we care about for kube-proxy. So if we get something flagged, and "something flagged" is really just a few people paying attention to a few distros (the debian-security-announce list is one that the release managers pay attention to), if we see anything sketchy on that list, we may consider rolling these images. Unfortunately, there is no automated mechanism today, at least, to trigger a rebuild of these images.
C: There's also a bit of a requirement for some human interaction for the image promotion. So a lot of the exciting stuff is actually in kubernetes/release.
C: Within the images folder, there are a few different directories that are based on either the scope of the image or the staging repository that they're located in. So artifact-promoter is one example; the k8s-cloud-builder image is actually used to build Kubernetes in Google Cloud Build.
C: So you can check out the Dockerfiles; they're not particularly exciting, but basically this is built from that kube-cross image that I was talking about earlier. The reason it's done this way, the reason we target the kube-cross version, is so that we make sure that we're building in the same environment that we would expect to be building in if we were running pre-submits on the kubernetes/kubernetes repo.
C: So nothing super exciting going on here: we're stripping Python 2, we're adding Python 3, we're making sure that certain directories are in place (and this can probably be removed now), and we're installing the Docker CLI and some additional Debian repositories. So nothing super exciting there. The remaining images are this one, the CI one we use in CI, and this is meant to be an image that is a little more lightweight than the kubekins image.
C: The kubekins image was used a lot for various pre-submit tests, and it's honestly a little bit too heavyweight to use for everything. Sorry, I'm just trying to float the meeting controls away, and now I need them; hold on.
C: So this is the CI image. Again, nothing super exciting happening here; it's using the Go version that we prefer. Something fun that we're doing: there is a tool called image-builder. It's not quite called that, but yeah, let's call it image-builder. So there's a directory for the builder, which is basically sugar on top of the "gcloud builds submit" flow, run locally, and all it does is add support for various substitutions.

C: So we manipulate those substitutions to allow us to run multiple variants of a build simultaneously. So these variants here: go1.17, go1.16, go1.15.
C: I will get something that is tagged 1.16-revision (currently 1.16.6-1), or it will be the git tag, dash, the config, which would be go1.16. We also have a latest tag that is not quite latest: it will be latest for that specific variant. So in CI I can say: I know this repo is running go1.16, or go1.17.
C: I want to make sure that we're testing for that. I don't necessarily want to specify the exact tag that SIG Release last built, because we're essentially building this on every commit, or so, in this repo. So this will give you the latest go1.16, and then you can just use that image to do some of your CI pre-submits.
A: One more question on this; maybe Adolfo is also here. With the recent SBOM effort going on in SIG Release, is that going to help us get sort of a programmatic way to pull, or get, a list of all images under k/k or k/release, with that work, when it's mature or released for 1.22?
C: Yeah, I think it gives us an opportunity to start introspecting on a few things, but it's not going to give us the backfill of things that have happened before. So this will primarily be for production artifacts, and very specifically for the ones that we release as part of the train, if you will: the release cycle. So you should be able to check those out in kubernetes/sig-release.
C: Then, under release engineering, artifacts... now, this is not always perfectly up to date, but it will give you an idea of some of the things that you would expect to come out of a Kubernetes release (just knocking over things on my desk): the conformance image, the API server, controller manager, proxy, and scheduler images, and then a whole wide variety of binaries and tarballs, some config scripts for GCE and Windows configurations, those tarballs that I was talking about, and lots more fun stuff that is in our release buckets. So, ideally, in a future state, we will be able to use these SBOMs to introspect into all of the artifacts for the releases that SBOMs are available for. I can't give any promises that we will do a backfill of previous releases.
A: I can understand. Oh, one related question here: today it's really dependent on humans to know there is a new email coming from security-announce, and then, based on that, PRs get created and the images are bumped. What if we had to automate it, where the scanners will tell us, versus us trying to keep track of any new vulnerabilities that are fixed?
C: Sure, so that, ideally, is all here.
C: So, as part of that distroless work, there was a list of exceptions that was written down. This page should really change to just describe the images instead, but we wanted to flag images that could not run on distroless, and why. So, the debian-base image: the reason that we need it is because we need that iptables image, as mentioned before, to support images that require iptables, the top ones being debian-iptables and kube-proxy.
C: The go-runner image... basically, you know, I've been calling it distroless-plus-plus. It's basically just giving us the ability to wrap exec calls and do some logging across these images, which distroless does not have support for: the kube-apiserver, the scheduler, and actually the controller manager should be listed here too.
C: So, non-release images. This is actually something where, if someone was interested in scanning through this list and seeing whether or not it was up to date, that would be super helpful. So this is basically tracking (and this was written by humans; there's no magic here) etcd, some of the images in the cleanup directory, fluentd-elasticsearch... Whether or not we own it, and whether or not it's supported from a project perspective, really depends; I think "supported" can have various definitions.
C: These images are used in the kubernetes/dns repo. So there are a few here: dnsmasq, kube-dns, node-cache, sidecar; the addon-manager, which we're also doing different things with now; as well as node-problem-detector. So there are quite a few that use debian-base, and there are some opportunities to see if these can be cut over to distroless.
C: Now, maybe some of those repos have changed since we last touched this list, and then there are a few non-org images that we track that, again, may not even be in use in the project. These were just things that we came across, and maybe there are opportunities to remove them.
D: Okay, I guess I've got a couple.

A: Yeah, go for it.

D: So, I guess, first: I think, if I remember right, debian-base is built from debian buster-slim from Docker Hub. What do we know about the provenance of that image, who builds it, and so on?
C: And this is slated to be the official image, or the official set of images, from the Debian project, so we trust it; these are some developers that are actually in and around the Kubernetes community as well.
C: So, if you ever wanted to check out how these images are built, you're able to go into the docker-debian-artifacts repo: buster, slim, so on and so forth. You can see the breakout of each of these images, and I think they do have a link to the... or maybe not. I believe there was supposed to be a link to the Jenkins builds that they do; I know that the golang images have that. Cool, it's this page, okay, yeah. So it describes how it's made here.
D: Got it, got it. All right, thanks a lot. So I guess my next question is... I know I bother you all the time about debian-iptables and getting debian-base updated. Have we looked at all into automating that process?
C: There has been no concerted effort to look at it, let's say it like that, and I think that really comes down to bandwidth. So I'm interested in doing that.
A: Yeah, and so, for everyone on the call, this is our opportunity to help out. If we can help out with this automation, we'll be solving a real-world problem, and then maybe it will be a quality-of-life improvement, if nothing else, for all the release engineers.
C: Yeah, yeah, and, you know, going back: there's also an opportunity to improve zeitgeist, because in an ideal world, what you could do is specify an upstream in zeitgeist, with this upstream flavour.
C: You could say that it was a container image, and link where you should be getting the image from: so the flavour, container, the registry, and the version. And there are a few things that you can do here, like specify that you want to validate remotely. Validate-remotely will try to phone home and see if there is a version out there, as opposed to just looking in that dependencies.yaml file. In an ideal world, zeitgeist would be able to tell you... you'd be able to fail a pre-submit, or you'd have a consistency check saying: hey, there's a new version over there that you can use, if it follows a consistent scheme. So the one thing to be aware of with the Debian images, really every image in general, is that we don't necessarily want to continually rebuild these images.
C: We would prefer to trigger rebuilds on changes to the repo, or on changes to the upstream itself. So, one example here, at least for the image building, for images that we control:

C: You can see that for these Prow job configurations, we have a run_if_changed on a lot of these images. So this image will only try to build itself, and fan out into these different GCB runs per variant, if you have changed a file under the images/build/debian-base directory. And that is partially just to save CI cycles and compute time, but really to only build these images as necessary.
C: Another thing to note is that the Debian images upstream are not built all of the time; they're built on some cadence. So for us to roll new debian-base versions on every change is not super valuable, because if there's not a new upstream version, then we're building new versions off of the same stale base, yeah.
C: "It depends" is the common answer. I think some of it is done by feel, after doing this for a bit, and also timing in the cycle: there are certain things that we will opt to not do at certain points of the cycle.
C: So, as we're heading into the later stages of the 1.22 release, there is a need to make sure that we're not producing new content in general. But we also know that there is an update for Go coming: there is a Go 1.17 version that is slated to be out in August, and I believe that's probably going to be early to mid-August, but that is when we are post code freeze for 1.22, and it may even be post release date. So the timeline doesn't line up for us to try to land a Go major version. That's the kind of update that we were like: well, we won't do that right now, because it would be a rush, and every Go major version update has the potential for chaos; I think that would be a nice way of putting it.

A: Yeah, I can understand.

C: So, yeah, that's one of the things where we'll be very careful about whether or not we bump, in general.
C: If it is a security-related update for golang... we also follow the golang-nuts and golang-announce lists; they now do pre-announcements for reasonably severe updates to Go. And if it is a minor security update, it'll just come out with the next minor bump.
A: Yeah, yeah, makes sense. I know we just hit the time on the meeting, so I want to respect everyone's schedules, but thank you, Stephen, again, for coming in and sharing so many things that, at least for me, I did not know; hopefully the other participants feel the same. I'm sure there are many more questions (I have a few as well), so I'll try to create a Slack thread on our security tooling channel for anyone who was either shy or ran out of time with their questions.