A: All right, welcome, welcome everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, a senior developer advocate at HashiCorp, where I focus on all things infrastructure, application delivery, and developer experience. Every week we bring a new set of presenters to showcase how to work with cloud native technologies.
A: This is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat that would be in violation of that code of conduct. Basically, please be respectful to all of your fellow participants and presenters. With that, I'd love to hand it over to Peter and Martin to kick off today's presentation.
B: Well, hello! Thanks very much for inviting us on. I'm Martin Wimpress. I work at Slim.AI; I'm a senior developer advocate and community manager, and I'm joined by my colleague Peter. Hi, Pete.
B: Yeah, so I think we'll just dive into the introduction of what we're going to be doing today. As highlighted in a recent CNCF software supply chain best practices white paper, tools like DockerSlim were used to limit the number of files in container images, thus reducing the attack surface.
B: So while our talk today applies to any OCI-compliant container paradigm, we'll be focusing on Docker images, mostly because they're familiar and the most prevalent out there. What we're specifically going to cover is Dockerfile best practice, just at a surface level; we can dive into that in more detail.
B: We have covered this on other occasions; we've done live streams and blog posts and so on about this in the past. Today we'll be using DockerSlim to analyze the container layer construction; we'll be doing some security scanning of container images; we'll be generating a software bill of materials (SBOM) for the container images; and then we'll be using DockerSlim to minify the container and analyze what's changed, making comparisons to the security and SBOM analysis we did earlier. We'll do that by exploring and diffing container images. So that's what we've got on the ticket. Is there anything you want to add to that, Pete?
C
No,
I
think
that's,
I
think,
that's
really
good.
I
think
you
know
what
what
people
will
see.
You
know,
I
think
one
of
the
you
know.
Advantages
of
this
is
that
you
know
you
can
run
just
the
code
that
you
need
in
production,
and
you
know
we
have
some
highlights
of
potential.
C
You
know
security
vulnerabilities
that
you
might
see
in
kubernetes.
You
know
examples
of
great
talks.
People
have
done
in
the
past
on
that,
and
also
some
of
the
drawbacks
of
of
slimming
right.
No
great
technology
is
without
trade-offs
and
so
we'll
be
highlighting
some
of
those
things
as
well
so
excited
to
to
take
it
away.
Martin.
B
Okay,
right
so
I'll,
just
I'll
be
in
the
terminal
for
much
of
this,
but
towards
the
end,
we'll
we'll
be
moving
to
a
pretty
web
app.
So
there
we
go.
That's
my
terminal,
so
we're
gonna
start
with
the
dockerfile.
B: We're just going to use this so we can see some syntax highlighting. This is a pretty good start from a best-practices point of view, so I'm just going to talk through some of the things that exist in this Dockerfile that are good things to do. First of all, we've specifically versioned the base image that we're pulling from, and we've even chosen to use a slim base image to keep the size of things down.
B: We also change ownership in that COPY command, and we only pull in the assets that we specifically require, so that we don't accidentally leak any tokens or secrets that might be in the app directory. Then we're using some best practice here: the pip install is not caching any data inside the container during the RUN command. And then we've just got some standard boilerplate here.
B
This
image
is
based
on
debian
and
although
we're
not
doing
any
apt
operations,
that's
a
bit
of
boilerplate
that
I
always
have
at
the
end
of
those
grips
to
clean
up
and
make
sure
we're
not
leaving
any
craft
behind
then
we're
choosing
to
run
our
app
using
a
non-privileged
user,
we're
exposing
the
port
that
the
app
works
on,
and
that's
also
useful
for
the
sort
of
discovery
process
that
docker
slim
does
and
then
we've
defined
a
tight
entry
point
to
our
application
as
well.
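Taken together, the practices Martin lists might look something like this hypothetical reconstruction (the exact image tag, user name, port, and file names are assumptions, not taken from the stream):

```dockerfile
# Pin a specific, slim base image rather than "latest"
FROM python:3.9-slim-bullseye

WORKDIR /app

# Create a non-privileged user to own and run the app
RUN useradd --system --no-create-home app

# Copy only the assets we need, owned by the non-root user,
# so tokens or secrets elsewhere in the build context never leak in
COPY --chown=app:app requirements.txt app.py ./

# Don't cache pip downloads inside the image
RUN pip install --no-cache-dir -r requirements.txt

# Run as the non-privileged user and expose the app's port
USER app
EXPOSE 5000

ENTRYPOINT ["python", "app.py"]
```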
B: So these are all good things, and there's a couple of links at the top of that, if you can see them: a blog post that I published earlier in the year, which covers this in a bit more detail, and also, because this is a Python app, a link to an article from Python Speed, where they specialize in Python inside containers. So, Pete, anything you want to dig into on that one?
C
No,
I
guess
just
a
note
for
the
audience
that
this
is
a
flask
app,
which
is
you
know,
one
that
we've
done
on
our
our
twitch
stream
in
the
past
so
might
be
familiar.
I
see
yannick
is
in
chat,
so
hello,
french
guy,
and
you
know
you-
you
may
have
seen
this
example
before,
but
it's
a
pretty
simple
basic
app,
but
for
us
the
app
isn't
super
interesting
in
that
we
can
do
this
with
a
node
container.
C
We
could
do
this
with
with
several
different
containers,
and
you
know
we're
just
using
this
as
a
very
basic
example,
because
a
lot
of
people
are
familiar
with
the
framework,
so
yeah.
B
Yeah,
it's
just
got
two
end
points
root
and
hello,
and
that's
all
it
does
now.
The
app
like
peter
says
the
app
isn't
important,
but
the
two
entry
points
will
be
we'll
use
those
as
examples
a
bit
later
on.
So
that's
the
app
that's
the
dockerfile.
I
suppose
the
other
thing
to
point
out
is
a
lot
of
people
sort
of
advocate
for
starting
with
things
like
alpine.
If
you
want
to
make
slim
minimized
containers
we've
chosen
not
to
do
that
here.
B
There
could
be
different
reasons
for
doing
that.
If
you
are
a
developer,
that's
specifically
knowledgeable
around
ubuntu
or
debian.
You
may
choose
to
use
a
base
image
that
you're
familiar
with
to
help
with
you
know,
develop
a
momentum
or
it
may
be
that
you
are
aware
that
using
something
like
alpine
could
actually
introduce
some
unexpected
behavior
in
your
application
or
your
language
ecosystem,
which
is
actually
something
that
can
happen
more
often
than
not
with
python
apps.
So
we
are
using
not
using
alpine
on
this
occasion.
B
So
let's
have
a
look.
What
have
we
got
here
now?
Well,
one
of
the
things
I
can
do
is
we
can
lint
our
docker
file
with
docker
slim.
So,
let's
just
lint
that
docker
file
we'll
get
some
output
there
and
it
will
generate
a
report
file
for
us
which
is
just
here.
So
if
I
cat
the
report
file,
we
can
see
that
it's,
it's
analyzed,
24
things
that
we
haven't
fallen
foul
of,
but
we
have
actually
got
one
finding
and
that's
that
we
have
no
docker
ignore
files.
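The fix for that finding is a small file at the root of the build context; something like this hypothetical example keeps VCS history, secrets, and local cruft out of any `COPY` operation (the entries are illustrative, not read from the stream):

```
# .dockerignore (illustrative)
.git
.env
__pycache__/
*.pyc
Dockerfile
.dockerignore
```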
C: I think somebody in the chat already called us out on one thing, and I know we mentioned this in the blog post that you talked about: exegete io points out that we should copy the app in after the pip install in the Dockerfile, which is an improvement for the layer caching. As we're making changes to our application, those are going to be more frequent, and we're not going to change the underlying dependencies as much.
C
I
know
we
talked
about
that
in
the
blog
post,
so
good
catch
and
slight
improvement
there
as
well.
So.
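That reordering is a standard layer-caching trick; as a hypothetical Dockerfile fragment (file names assumed), the dependency install comes first so that app-only edits don't invalidate the pip layer:

```dockerfile
# Install dependencies first: this layer stays cached until requirements.txt changes
COPY --chown=app:app requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the frequently-changing application code last
COPY --chown=app:app app.py ./
```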
B
Yeah
right,
then,
so
with
that,
let's
move
on
to
the
the
very
basics
we'll
we'll
build
our
docker
image
here.
So,
let's
think
now
we'll
call
it
prod
fat.
So
this
is
our
fat
container.
So
there's
nothing
here
that
anyone
wouldn't
expect
to
see
so
docker
images
and
there
we
have
so
our
base
base
image
is
122
megabytes
and
by
adding
our
app
and
dependencies.
We've
now
got
a
container.
That's
133
megabytes.
So
that's
not
too
shabby,
and
I
will
run
my
app
very
quickly.
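The build-and-run steps here amount to something like the following sketch (the image name follows the stream; the port mapping and app port are assumptions):

```shell
# Build the production image from the Dockerfile in the current directory
docker build -t slim-demo:prod-fat .

# Check the resulting image size
docker images slim-demo

# Run it, publishing the Flask app's port
docker run --rm -d -p 5000:5000 slim-demo:prod-fat

# Poke at both endpoints
curl http://localhost:5000/
curl http://localhost:5000/hello
```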
B: As you can now see, I am poking at my application, and I can also hit the other URL, which is this one. So there we go; that's the entirety of what that app does, and there it is running, which comes as no surprise. Now what we're going to do is simulate something that we see quite regularly, and that's container images that make their way into production still carrying development tooling inside them. So we're going to create a simulation of that.
B: We will build that dev container for the purposes of comparison, and we'll call it dev-fat on the tag there. So we'll build that; we don't need to run it, but what we will do is have a look at what that's done to the size of the container. As you can see, it's doing quite a bit more.
B
Oh
goodness,
I
can't
type
a
toffee
today
right
there
we
go
so
now
that
dev
container
is
271
megabytes
in
size.
So
that's
a
container
we'll
be
using
for
some
comparison
processes
a
little
bit
later
on
now.
What
we're
going
to
quickly
go
through
here
is
scanning
docker
containers
or
containers
generally
for
security
vulnerabilities
and
also
to
generate
s-bom.
Now
we
could
use
a
docker
scan
to
do
that.
B: So here's our vulnerability list. It tells us that 114 packages have been cataloged and there are 67 vulnerabilities. Most of these are low or negligible, with some mediums and criticals hiding away in there. What have we got here? This looks like an interesting one: libgcrypt, libgnutls. Okay, so there's a bunch of CVEs in here with different statuses.
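The scan being read here can be reproduced with a vulnerability scanner such as Anchore's Grype; a minimal sketch, assuming the image names used on stream:

```shell
# Scan the fat production image for known CVEs
grype slim-demo:prod-fat

# The same scan against the dev image shows the effect of the extra tooling
grype slim-demo:dev-fat
```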
B
Now
you
should
run
these
types
of
scans
and
analysis
before
you
slim
a
container,
so
you
get
an
overall
picture
of
what
was
used
to
construct
that
container.
B
The
slimming
process
removes
some
metadata
from
the
container
and
that
limits
the
effectiveness
of
these
scanning
tools.
So
we
do
the
scanning
now
on
the
on
the
full
fat
containers
and
then
we'll
start
well
we're
not
going
to
do
that,
but
you
should
store
and
publish
this
metadata
alongside
those
images,
so
you've
got
a
record
of
what
was
used
to
compose
them.
So
that's
our
security
scan
done
there.
Let's
just
take
a
look
and
see
what
that
that
that
looks
like
if
we
do
the
same
on
our
dev
container.
B
So
what
have
we
got
here?
185
packages,
121
abilities,
so
that's
a
another
71
packages
included
in
that
image
and
nearly
double
the
number
of
vulnerabilities.
So
just
by
including
some
dev
tools,
we've
increased
the
potential
attack
surface
inside
our
our
container
pearl
is
installed
now.
Yes,
so
we
can
do
the
same
thing
with
trivi,
which
is
another
vulnerability
scanning
process.
That's
gonna
tell
us
much
the
same
sort
of
thing.
B
If
we
just
look
at
the
sort
of
edited
highlights
here-
oops,
yes,
that's
correct,
so
we
can
see
the
package
here.
We
we
can
see,
there
was
65
59
low
to
medium
and
so
on,
and
we
can
do
the
same
for
our
dev
container.
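A minimal Trivy invocation for the same comparison would look roughly like this (image names as in the stream):

```shell
# Scan both images with Trivy and compare the findings
trivy image slim-demo:prod-fat
trivy image slim-demo:dev-fat
```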
B
And
you
know,
unsurprisingly,
it
finds
more
stuff
as
well,
so
you
can
use
those
two
tools
to
you
know,
get
cve
analysis
from
your
containers
and
then
the
other
thing
we're
going
to
do
is
we're
going
to
use
a
tool
called
sift,
which
is
a
tool
and
library
for
generating
software
bill
of
materials
for
container
images
and
file
systems.
So
this
is
another
open
source
project
from
the
same
camp
as.
B
Gripe
so
let's
take
a
look
at
this:
we'll
do
the
production
container
and
this
will
build
us
a
nice
big
list,
and
you
can
see
here
the
difference
between
those
things
that
are
debs
that
were
installed
as
part
of
the
distro
versus
those
things
that
are
python
packages
that
were
pulled
in
via
pip
when
we
constructed
the
container
in
the
dockerfile.
So
that's
looked
at
114
packages
and
this
tells
us
the
packages
that
are
included
in
the
version.
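Generating the SBOM with Syft looks roughly like this sketch (the output-format flag and file name are illustrative):

```shell
# Catalog every package in the image: distro debs and pip-installed Python packages alike
syft slim-demo:prod-fat

# Emit a machine-readable SBOM to store and publish alongside the image
syft slim-demo:prod-fat -o spdx-json > prod-fat.sbom.spdx.json
```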
B
So
this
is
in
a
really
easy
way
to
generate
that,
and
then
we
can
do
the
same
thing
against
the
dev
container
and
you
know
to
the
surprise
of
no
one:
there
is
more
stuff
in
it.
B
One
of
the
interesting
things
in
here
is
as
a
result
of
installing
those
dev
tools.
We
now
have
the
open,
ssh
client
installed.
B
So
what
we've
done
is
we've
put
a
very
interesting
tool
into
our
container
image
that
any
would-be
hijacker
would
be
delighted
to
find
you
know
inside
that
container
for
using
for
island
hopping
or
things
of
that
nature.
You
know,
we've
we've
seen
this
recently.
There
was
a
talk
at
which
event
was
it?
Pete
was
it
cloud
cloud
days,
it
was
container.
B
Yeah-
and
it
was
a
slightly
sort
of
staged
example,
but
it
made
a
very
good
point
about
where
defaults
changed
and
the
behavior
of
a
kubernetes
cluster
is
altered,
and
then
you
have
an
application
that
has
a
bug
in
it
effectively.
B
It
was
unsanitized
user
input
and
from
that
they
were
able
to
create
a
reverse
shell,
because
inside
the
container
image
in
the
cluster
there
were
shells
there
was
curl
and
just
enough
tooling,
to
create
reverse
shells
and
provide
the
tooling
to
somebody
to
actually
poke
at
the
apis
of
kubernetes
and
then
disrupt
its
operation.
B
So
what
we're
going
to
be
looking
at
in
just
a
bit
is
where
we
minify
the
container
is
removing
that
unnecessary,
tooling.
That
exists
inside
the
container
images
to
reduce
your
tax
surfaces.
So
in
that
same
situation,
yes,
your
application
may
still
have
that
bug
inside
it,
which
means
that
you
know
somebody
can.
You
know
overflow
the
the
the
application,
but
then
none
of
the
tooling
exists
in
the
container
that
they
can
actually
island
hop
or
go
any
further
with
that
exploit.
So
that's
that's
the
the
benefit
here.
C
Yeah,
martin
we've
got
a
couple
questions
from
the
chat,
so
it
should
take
them
in
in
a
couple
orders.
So
just
as
a
reminder,
so
anchoring
or
sorry
as
you
mentioned
gripe
and
sift,
you
know
they're
supported
by
the
the
company
encore.
So
you
know
they're
great,
open
source
tools.
We
have
a
question
about
open
source
projects
for
beginners
who
are
still
learning,
so
I
think
if
you're
just
getting
started
with
containers,
you
know
kind
of
creating
containers.
C
You
know
that
have
a
simple
application
in
them.
It's
just
a
great
way
to
get
started.
If
you
do
the
sort
of
docker
hello
world
type
of
examples
in
whatever
programming
language
that
you
use-
or
you
can
actually
just
take
any
hello
world
tutorial
so
right
now,
I'm
learning
rust
right.
There
is
a
great
rust
container
out
there.
I
don't
really
know
how
to
use
rust.
So
I
have
a
hello
world
app.
I
can
containerize
that
in
a
pretty
simple
docker
file,
and
then
I
can
run
some
of
these
tools
on
them.
C
I
can
use
docker
slim
to
try
slimming
that
container.
I
can
use
these
open
source
tools
that
we
just
showed
from
the
encore
company.
You
know
sift
and
gripe
to
generate
an
s-bom
and
do
a
vulnerability
scan.
I
could
use
the
docker
scan
vulnerability,
scanner
to
see
sort
of
what's
inside
them
and
what
might
be
vulnerable
there
at
slim,
ai.
We
have
a
free
web
platform
that
you
can
actually
pull
that
container.
C
You
can
look
at
just
the
rust
container
and
see
what's
inside
of
that
or
you
can
actually
pull
your
own
application
from
docker
hub
or
amazon
ecr.
So,
if
you're
new-
and
you
want
to
get
started
with
some
of
this
stuff
like
just
creating
a
very
simple
hello,
world
container
is
a
great
way
to
get
started
and
do
that
in
the
application
that
you're
the
application
language
that
you're
most
familiar
with.
So
are
you
rusty?
C
If
you
are
more
or
less
familiar
with
the
rust
language,
I
believe
you
are,
I
believe
it's
a
rust
station
is,
is
what
they
let
you
they
give
you
that
tag
as
soon
as
you
do.
The
hello
world
example,
which
I
finished
this
morning
so
yeah
we're
all
rusty.
Someone
else
is
asking
what
exactly
happens
during
the
slimming
process.
I
think
we'll
get
into
that
right.
Martin
you're
going
to
show
docker
slimming,
so
that
might
be
a
good
segue
into
the
next
thing.
So
so
great
questions,
please
keep
the
questions
coming
so
indeed,.
B
So
I
just
need
to
pick
up
my
terminal
again.
There
we
go.
B
There
we
go
right
then,
so
if
we,
what
we're
going
to
do
now
is
we're
going
to
use
docker
slim.
B
It
has
a
feature
called
x-ray,
we're
going
to
use
that
to
sort
of
generate
some
analysis
on
the
layer,
construction
and
we're
going
to
take
a
little
look
at
that
so
docker
slim
and
then
x-ray
we're
not
going
to
export
all
the
artifacts
just
yet
we're
going
to
scan
our
production
container
and
we're
just
going
to
output
all
of
that
into
a
text
file
so
that
we
can
go
and
look
at
it,
and
I
don't
have
visual
studio
code,
so
it
will
have
to
be
the
ever
wonderful
nano.
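The command being run here is roughly the following sketch (the output redirection and file name are assumptions):

```shell
# Analyze the layer construction of the fat image without exporting artifacts
docker-slim xray --target slim-demo:prod-fat > xray-report.txt

# Then search the verbose output for layer boundaries
grep 'layer.start' xray-report.txt
```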
B
So,
as
you
can
see,
this
is
quite
verbose
output,
but
there
are
some
things
if
you,
if
you
look
through
here,
you
can
start
to
find
some
interesting
strings.
So
one
such
string
is
this
one
which
is
info,
equals
layer
dot
start.
So
if
we
look
through
the
file,
we
can
see
the
start
of
each
layer
in
the
construction
here.
So
we
can
see
details
about
how
each
of
the
layers
was
put
together.
B
So
we
can
see
the
instructions
that
were
run
here
and
the
object
count
and
things
of
that
nature
the
size
of
things
that
were
added.
You
know
this
one's
particularly
interesting,
there's
a
lot
that
went
on
here,
so
we
can
see
inside
here
and
see
quite
a
lot
of
information,
and
we
can
also
you
know,
do
things
like
find
the
exposed
ports?
You
know
so
the
information's
in
here
now.
I
do
realize
that
this
is
a
not
particularly
human,
friendly
way
to
sort
of
visualize
that
data.
B
So
we'll
do
we'll
run
that
again
here
and
what
that
has
created
is
this
file
here,
data
artifacts.tar
and
what
you
can
now
do
is
you
can
go
to
portal.slim.dev
and
you
can
upload
that
tar
file
and
it
will
generate
you
a
technicolor
web
view
of
that
analysis,
and
it
turns
all
of
that
sort
of
machine,
readable
data
into
consumable
information.
Now
I'm
not
going
to
go
to
the
website
and
do
that
just
now,
because
the
process
of
switching
between
web
and
terminal
is
a
bit
tricky
for
me
today.
B
So
what
we'll
do
is
we'll
save
all
of
the
visibility
stuff
with
the
web
app
until
a
bit
later
and
we'll
look
at
we'll
look
at
all
of
the
container
exploration,
analysis
and
diffing
features
all
together
a
bit
later
on,
and
hopefully
that
will
help
answer
some
of
the
questions
about
what
exactly
happens
when
a
container
is
slimmed.
B
C
That's
quite
the
the
question
in
the
point
that
we
got
before
from
x,
execute
io,
who
pointed
out
that
you
know
the
layer.
Construction
would
actually
be
a
little
better
in
the
docker
file.
If
you
know
we
installed
the
dependencies
before
we
copied
the
app
over,
you
know,
that's
a
good
place
to
see
that
in
this
sort
of
layer,
construction
from
the
docker
slim
x-ray
export.
Also,
you
can
see
it
in
the
in
the
web
platform.
It's
slim
ai,
so
yeah.
B
On
that
looks
like
a
good
command
of
linux,
it
helps
a
lot,
not
necessarily
I
mean
I'm
comfortable.
You
know
in
a
terminal,
but
I
mean
the
docker
commands
and
the
docker
slim
commands.
I
think
docker
slim
only
has
four
main
primitives
that
you
need
to
learn.
It's
build
lint,
x-ray
and
profile.
You
know
so
you
know
it.
B: There's enough complexity, but enough simplicity, in a few of the basic primitives for Docker and DockerSlim. So we're now going to take a look at slimming the container. There's a couple of things to point out here. A lot of people have been treating container size like a vanity metric, and that's really not the case: the size of a container can be used as an indicator of the quality of that container and how well maintained it is.

B: It's not the be-all and end-all, but it's an indicator of container quality, and that's certainly something that we're looking at working on: trying to take some of these indicative metrics around containers and turn them into a sort of health report, using more than just container size, but also that security scan, the software bill of materials, and various other things.
B
So
slim
will
slim
the
container
and
we'll
we'll.
Do
this
I'll
explain
what
each
of
these
parameters
arguments
do
so
we're
calling
docker
slim
we're
asking
it
to
build
a
new
container,
we're
going
to
call
this
container
slim
demo
with
the
tag
prod
slim
and
it's
going
to
be
using
our
prod
fat
container
as
the
source.
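That invocation looks roughly like this (flag spellings follow the docker-slim CLI; any flags beyond these two are not shown here):

```shell
# Build a minified image from the fat one, observing the app as it runs
docker-slim build --target slim-demo:prod-fat --tag slim-demo:prod-slim
```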
B: It runs the container, and the observer analyzes everything that the container hit, touched, or used in order for that application to run. It's very much worth pointing out that there is a very key bit of information up here: the HTTP probe commands show a count of one. It did precisely one thing: a single GET request on the root of the application.
B
And
I
understand
that
is
probably
not
sufficient
for
most
people
and
we're
going
to
get
into
some
of
the
other
things
that
you
can
do,
because
I
appreciate
our
application
is
probably
not
as
interesting
or
complex
as
yours,
but
we'll
show
you
how
to
build
out
how
that
probe
works.
So
the
other
interesting
metric
here
is.
It
has
minified
our
container
by
nearly
six
times
the
original
was
133
megabytes
and
our
optimized
container
is
now
just
23..
B
So
let's
just
confirm
that
with
docker
images-
and
indeed
here
is
our
prod
slim
container
23
megabytes
versus
133,
so
we
should
probably
run
that
and
run
the
prod
slim
container
there.
It
is
there's
you
can
see
the
terminal
output,
I'm
just
heading
over
here,
I'm
now
going
to
poke
up
my
there
we
go,
you
can
see
me
hitting
the
root
of
the
app
and
then
I'm
going
to
hello
world
there.
We
go.
C
One
one
thing
to
point
out
as
well
is
that
you
used
the
python
slim
container
as
the
base
image,
which
is
again
the
sort
of
minimum
container
from
the
python
community.
That
has
nothing
to
do
with
slim
or
docker
slim.
That's
the
python
community's
image,
that's
kind
of
the
bare
minimum
of
python
tools
that
you
would
have.
So
that's
why
that's
like
120
megabytes
and
then
we're
reducing
that
down
into
25
megabytes.
C
If
you
were
to
use
python
latest
as
your
base
image,
which
is
you
know,
you
know
kind
of
that,
the
most
generic
one.
You
know
that
might
be
a
gigabyte
and
you
would
still
sort
of
be
able
to
slim
down
to
this.
This
type
of
magnitude
so.
B
Yeah,
so
we've
just
talked
there
about
it
did
a
single
thing:
it
will
do
a
get
request
on
the
root
of
the
app
and
then
it
will
attempt
to
crawl
any
urls
it
finds
as
a
result
of
that
process,
which,
of
course,
for
a
restful
api.
It
may
or
may
not
find
anything
at
that
point.
So
we're
going
to
look
at
a
couple
of
additional
commands.
We
can
use
there's
one
which
is
called.
I
will
just
put
this
here.
It's
the
http
probe
command.
B
So
what
we
can
do
is
if
you've
got
a
simple
app
like
mine.
You
can
add
additional
probes
to
that
set
of
http
probes.
So
here
is
a
I'll
just
call
this
probe
one
for
the
for
the
purposes
of
this
oops
I'll
put
that
in
the
wrong
place
here.
B
Let's
just
call
this
p1,
that's
easier,
so
we're
adding
this
here.
So
my
app
has
another
end
point
under
slash
hello.
So
what
this
is
doing
is
adding
this
additional
end
point
to
the
probe.
So
if
we
run
this
process,
we're
slimming
that
same
container,
but
this
time
we
will
see
that,
in
fact,
two
probes
ran
as
part
of
the
discovery
process.
Here
we
can
see
that
the
count
was
two
and
it
was
get
on
hello
get
on
the
route.
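Adding that extra endpoint to the probe set looks roughly like this (the flag name follows the docker-slim CLI; the image names follow the stream):

```shell
# Probe /hello in addition to the default GET on /
docker-slim build --target slim-demo:prod-fat --tag slim-demo:prod-slim \
  --http-probe-cmd /hello
```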
B: There was a count, and we can see that they were both successful. Now, this is important: if I just did the probe on the root of the app, there is a possibility that it doesn't exercise all of the code paths and pull in all of the dependencies of the application, and therefore the resulting container won't be fully functional. But by making sure I've hit all of the endpoints, I've exercised all of those code paths and made sure that any dependencies for the whole application get taken into account in the minified container that it creates. But I appreciate... go on.
C
I
was
just
gonna,
say
kind
of
getting
back
to
avinash's
question
of
like
what
happens
during
slimming.
I
think
that's
what
you're
kind
of
explaining
right
now,
which
is
the
you
know,
the
container,
runs
and
and
again
I
think
we
should
say
that
there
are
a
lot
of
different
approaches
to
slimming
and
we're
showing
sort
of
the
docker
slim
approach
to
slimming,
which
is
to
try
to
automate
it
and
really
understand
everything.
That's
in
the
container,
I
think,
slimming
at
a
high
level
is
this
notion
that
you
should
only
ship
to
production.
C
C
C: The way it works is to run the application and stimulate that application in certain ways so that it can see everything that's running; it's a mixture of static and dynamic analysis. Then what it's going to do is rebuild a functionally equivalent version of the container, which is what Martin is doing right now. The flags Martin's showing are just different ways to better run the container, so that DockerSlim can be smarter about what it keeps and what it can take out.
B
And
if
you've
got
a
a
more
complex
application
than
this-
and
you
have
many
endpoints
and
maybe
it
responds
to
more
than
just
get
requests,
then
you
can
group
together
lots
of
different
http
probes
in
a
single
file.
So
I
will
show
you
the
very
simple
one
I
have
in
fact,
let's,
let's
use
this
for
colorization,
so
I
have
a
file
here:
probe,
dot,
json
and
it
has
a
couple
of
commands
in
here.
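A probe file covering both of the app's endpoints might look like this. Treat the exact contents as an illustrative reconstruction rather than the file on screen, and the trailing `python3 -m json.tool` call is just a local sanity check before handing the file to docker-slim:

```shell
# Write a hypothetical probe file with one GET per endpoint
cat > probe.json <<'EOF'
{
  "commands": [
    { "protocol": "http", "method": "GET", "resource": "/" },
    { "protocol": "http", "method": "GET", "resource": "/hello" }
  ]
}
EOF
# Validate that the file is well-formed JSON
python3 -m json.tool probe.json
```

It would then be passed to the build via a flag such as `--http-probe-cmd-file probe.json`.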
C: Here, Bernard asks, and we get this question a ton, so I think it's good to address: can you get into a situation where you execute all the endpoints but, to paraphrase a little bit, still miss something? I think at certain stages it's a little more art than science. You certainly don't need to create a new probe for every single endpoint with every single variable across your entire app.
C
But
a
common
thing
for
people
who
are
just
brand
new
to
docker
slim
is
they
just
do
kind
of
docker
slim
build
and
they
they
do
that
on
their
container
and
suddenly
it
stops
working
and
that's
what
all
of
these
flags?
And
if
you
go
to
the
docker
slim
github
repo,
you
can
find
a
full
list
of
you
know.
I
don't
know
there
might
be
300
different
flags
that
you
know
the
the
contributors
have
built
into
the
project
over
time
that
really
help
you
sort
of
tune.
What
that
minification
recipe
looks
like.
C
So
if
your
app
is
really
really
complicated,
you
might
need
more
flags
if
it's
really
really
simple,
it
might
work
better
out
of
the
box,
but
the
goal
of
docker
slim
is
to
be
able
to
really
minify
any
app.
It's
just
that
your
mileage
may
vary
depending
on
how
complex
your
app
is
and
at
what
points
you
know,
different
code
gets
executed,
so
yeah.
B
And
you
know
something
else
that
we're
working
on
is
to
take
that
rich
set
of
tooling
that
exists
within
docker
slim
hello.
It's
stopped
screen
sharing
uh-oh.
Let
me
try
that
again.
B
Oh,
this
is
difficult.
Oh
okay,
it
doesn't
want
to
share
my
oh
there
we
go
there.
We
go
had
a
good
hard
thing
about
it.
B
Is
that
there
is
that
rich
palette
of
of
tooling
in
order
to
craft?
You
know
a
docker
slim
command
that
will
that
will
work
for
your
application,
but
we're
trying
to
simplify
that
and
turn
that
into
a
much
simpler
process
for
actually
building
out.
That
means
by
which
you
can
probe
your
your
container.
B
So
I
I
have
this
here,
we're
doing
another
build
and
we
this
time
we're
going
to
use
that
json
probe.json
there
to
to
build
it
out,
and
we
will
see
that
it
will
run
run
that
in
order
to
execute
those
probes.
B
And
what
we'll
be
looking
at
a
little
bit
later
is
the
impact
of
this
minification
process.
B: So I'm building a slim container from our dev Dockerfile, which included those additional dev tools, and I've also included the extra HTTP probe command there, just for completeness. And if we do docker images here, you'll notice that our prod-slim and dev-slim containers are exactly the same size. Now, the reason why this is important comes down to one of the advantages... well, yeah.
B
Let's
say
based
around
debian
in
this
case,
and
they
want
to
create
smaller
containers
because
they
want
to
reduce
the
attack
surface
and
all
of
those
other
good
things.
They
want
to
increase
the
type
that
will
decrease
the
time
it
takes
to
deploy
into
production
and
smaller
containers
are
faster
to
deploy
faster
to
start,
and
somebody
suggests.
B
Well,
you
could
start
with
alpine,
but
then
you
have
to
relearn
a
whole
bunch
of
new
platforms
and
tools
and
developers
are
already
universally
time
poor
and
asking
them
to
learn
another
another
thing
and
another
thing
another
thing
and
change
the
way
that
they
work
is
harder
than
introducing
something
that
just
augments
what
they're
already
doing.
So.
B
But
then
you
can
put
those
containers,
through
your
ci
cd
platform
through
docker
slim,
to
create
these
small
containers
that
have
got
all
of
that
tooling
removed,
so
that
they
can
then
be
pushed
into
production
with
you
know
that
reduced
attack
surface
and
those
efficiency
benefits.
B
C
No,
I
think,
that's,
I
think,
that's
a
great
point
and
and
really
illustrates
what
we've
been
talking
about.
We
got
a
question.
Can
docker
slim
work
directly
with
tar
image
files?
We
use
basil
to
build
images
resulting
in
file
artifacts
rather
than
having
images
in
our
local
docker
registry.
C: Yeah, one caveat: a question we get a lot is, does it require Docker at all? And the answer to that is yes. DockerSlim is a binary that you download onto your local machine, or you can put it on whatever machines you like; a lot of people build it into their CI pipelines and stuff like that. It can run as a container as well, but it does rely on the Docker daemon to do its thing and run the image and understand it.
C
What
you
get
is
an
oci
compliant
image,
so
you
run
that
on
whatever
you
want,
but
there
is
that
requirement.
So
people
that
are
you
know
just
allergic
to
docker,
for
whatever
reason
you
know,
you
know,
probably
won't
work,
but
if
you
have
a
different
run
time,
that's
fine!
If
you
have
a
different
development
process,
it's
usually
fine
and
there
is
a
containerized
version,
a
docker
slim
that
you
can
run
as
a
sidecar
as
well.
So
question:
are
there
restrictions
on
the
stacks
it
supports?
I
see
a
lot
of
languages
listed.
C
I
do
a
lot
of
net,
which
is
not
our.net
expert
is
in
the
chat
that
has
been.
He
can
tell
you
about
his
experiences
with
dot
net,
so
I
will
let
him
answer
that
question,
because
I
am
not
a
dot
net
person.
The
goal
of
docker
slim
is
to
work
with
any
language,
so
any
container
any
language
I'll
say
you
know,
works
really
really
well
right
out
of
the
box.
With
you
know,
kind
of
web
server
style
applications,
apis
websites,
so
node.js
containers,
python,
flask,
django
stuff,
like
that
works.
Super
super.
C
Well,
obviously,
works
really
really
well
with
go
it's
written
and
go.
You
know
it
definitely
works
with
other
languages,
but
you
know
the
levels
of
complexity
and
the
sort
of
results
you
get
can
vary
a
little
bit.
Data
science
containers
work,
but
they
take
a
little
bit
of
of
doing.
C
You
know
they're
also
humongous,
so
some
reductions
in
those
is
is
usually
a
pretty
big
benefit,
but
we've
done
them
in
r
and
with
kind
of
jupiter
style
data
science
containers,
but
you
know
that's
a
little
bit
of
a
further
out
their
use
case.
So
yeah
give
it
a
shot.
Let
us
know
what
you
think:
we
have
a
discord
for
both
docker
slim
and
we
have
one
for
slim
ai.
C
B
Right
then,
let
me
share
my
screen
again.
Let's
pick
up
this
okay,
we'll
just
wait
nicely.
So
here
we
go
so
here
I
am
logged
into
the
slim
developer
platform
and
what
we're
going
to
do
is
we
talked
about
exploring
and
diffing
containers,
and
I
talked
about
you
know
you
can
upload
your
x-ray
reports.
Well,
that's
right
here
on
the
home
page
once
you're
logged
in
it
even
tells
you.
The
command
here
to
you
know,
generate
your
x-ray
report
and
then
you
can
just
upload
it
here.
B
Now,
I'm
not
going
to
show
you
this
because
we're
going
to
go
into
more
detail
in
a
couple
of
other
areas.
So
what
you'll
see
at
the
top
here
is
connectors.
What
I
decided
to
do
is
I
pushed
my
fat
and
slim
container
images
to
docker
hub.
You
can
use
a
number
of
different
registries
and
we'll
take
a
look
here.
If
we
look
at
mine
you'll
see
that
I
have
mine
connected
to
docker
hub
at
the
moment,
but
you
could
use
gcr,
ecr
or
key,
so
there
we
go.
B
So I pushed my fat container and my slim container to Docker Hub earlier, and then I added them to my favorites. They're already there, but I'll show you how I did that: you can click on this Add button, and then I can see what's inside my connected repositories. Here's our slim demo and here are our two container images; I connected those up and added them to my favorites earlier.
B
So if we go back to the favorites: here is the slim demo prod-slim, and here is the slim demo prod-fat; this is the fat container. Let's start with the fat container. We'll click here to analyze and view this container, and we'll pull out a few things that are worth looking at at a high level.
B
You can see here all of the layers that were used to construct this container. But if we just look at the overview very quickly, we can see the user, what it's built upon, what ports it exposes, the size, the working directories, and all of that good stuff. And then down here we can also see the shells that are included. This is an important bit of information, again, when we're talking about that whole issue of
B
containers being hijacked through bugs in software. We can also see the files that have special permissions, certificate bundles, and a whole bunch of other stuff. So at a high level you can easily see what was inside that x-ray report, which was rather difficult to scan as a human.
B
But if we now look across at the file explorer here, we can see this is a view of the container with all of the layers applied. If we step through it, we can look at layer 0 and actually expose which commands were used to generate that layer. Now, this is obviously coming from the base image; we ingested a base image, and I think the first five layers of this container are actually what we inherited from it. So you can even analyze what actually happened
B
in the construction process before you ingested it as a base image. So here we can... well, that's all going to be one thing, so let's do this. We can see here that
B
one file was added, but it gets interesting when we get along here. I think layer 7 is where we install our pip requirements, and we can see that 832 files get added and they all sit in around here. Oops, I didn't mean to click on that, but you can click on anything and it shows you everything that's in there, and you can
B
filter this down. Well, I know Flask was something that got installed, so I can just look at the things that were added as part of Flask. This is a nice way to look at the whole image (this is the fat container, remember), and it also fully fleshes out the Dockerfile, including all of the full verbose steps that we inherited from the base image. So you can see precisely how this container was put together.
B
Yeah, so we will look at a diff. I was just going to very quickly look at the slim container, because there are a couple of things in the overview that are worth pointing out here because of the red. The first is that the slim container is a single-layer container: it has been reconstructed with just the raw ingredients that it requires. And in the overview here we can see there are no shells inside this container, and only the /tmp directory has any sticky bits applied.
B
So you can immediately see a lot has changed, and you can go through the file explorer process; obviously everything's in a single layer at this point. So it's more interesting, as Pete says, to go and take a look at a diff between the slim and fat containers.
B
So we're looking at a file system diff here, from fat to slim, and we can see what was removed. We can see that in the slim container the requirements.txt file has been removed, because the minified container simply doesn't need it to operate. We can see all of this user space that's now absent; there's obviously tons of stuff that's been thrown away. But we might want to actually dig into this. Let's think about some of those scans we did earlier.
B
I think we knew that libgnutls was one of the things that had a vulnerability. Well, if I filter this list, we can see that that library has been completely removed in the slimming process. So if we are asked, "are we shipping this version of libgnutls in our production images?", our bill of materials says "well, yes, we do", but then this tool enables you to go and look at the slim container and actually come back with the answer:
B
no, we are not; the slimming process is removing it. And similarly, we know that SSH was added... that's probably a bad way to search for it; I can't even remember what the libraries are called now, so that's a bad example, but you get the idea: when we search for things, we can see that the shells have been removed.
C
I think that was part of the idea behind Slim.AI and the platform, and why we're building this: to help debug this slimming process, no matter what containers you're using, whether you're using Docker Slim or some other type of container approach, just to give you some more insight into it. If people want to see more of the Slim platform, feel free to go check it out.
B
And I think those are the main pieces that we wanted to show there. So what we've gone through is looking at some best practice with your Dockerfile; it looks like we've still got some learning to do there.
B
Thank you for the hot tips earlier on. But starting with that process: starting with what is a decent-looking container, doing some security analysis and some SBOM generation, then slimming those containers and analyzing what happened as a result of that slimming process, and then seeing how that relates to the software bill of materials that we generated. If we are asked the question, "are we shipping library XYZ in our production containers?", we can trivially go and find out the answer.
C
Well, that's awesome! Thank you, Martin, that was super cool. If people in the chat have any questions or want to see anything else, again, you can find us at Slim DevOps on Twitter or on Twitch, where we do a lot of these demos, show more of the platform, and do some Docker Slim examples. If you have questions, just reach out. Taylor, any questions on your side?
A
Yeah, I think just more of a fun kind of exercise. I know that typically, when I've taken a look at Docker Slim, they've said that if you use a compiled language you're going to see a lot more reduction in terms of size. Just a curiosity question: what is the biggest delta you've seen in terms of file size reduction?
C
Or 70x. But yeah, you definitely get some pretty small ones.
A
It's interesting to take a look at the space around cloud native buildpacks and Docker Slim. I wonder if we'll get to the point where I'll be able to chain all these things together and just have, "oh yeah, it's just a kilobyte to ship around." That'll be something.
C
Yeah, we actually talked a bunch with the Buildpacks folks at KubeCon, and you can run Docker Slim on buildpacks-built containers. You don't get as much out of them, because they're sort of optimized already, but with the Slim platform you can look inside, and it's very interesting to me to see how those containers are built, because they tend to be built a little bit differently than if you were just running a Dockerfile. So yeah, we're talking a lot with them.
C
We actually just published a blog post about working with buildpacks. I think they're super cool; we'll probably do an example on Twitch with them. So yeah, that's a cool technology, and I think it's complementary to the things that we're thinking about and doing.
A
That's exciting. I know it's obviously always best to focus on the workflow over the tools themselves, but are there any other tools or practices that y'all might add to the pipeline, or just find interesting on the horizon at all? Oh, we've...
B
...lost Pete. So yeah, we're obviously looking to improve these all of the time, and we've got a number of things in the works at the moment. One is integrating a GitOps workflow into this minification process, and another is operating minification over a group of containers via Docker Compose.
B
The more common case is that you have several containers that operate a microservice in harmony, and you want to be able to take a new revision of one of the containers in that collection, minify it, but then test and validate it against the other containers that it relies upon. I think we put a blog post up about that last week.
B
So yeah, that's one change that we've got coming which I'm quite looking forward to, because I love that GitOps workflow: being able to tag and mark things in a familiar way, but operate that on a whole collection. I think that's going to be cool.
A
Absolutely agree. I can't wait to see that; I always really like to see the GitOps workflows. It's nice that at this last GitOpsCon and KubeCon we kind of got that formal definition of what GitOps is, so we can rally around that definition. It's definitely one of my favorite workflows too; it's nice to have that rather than "we do things a little differently around here" at every company you work at.
B
Right, yeah. We've been working with PaymentWorks, who are a design partner, and they have been helping flesh out how that whole GitOps workflow should function. They're already making good use of it, but that's a feature that's behind closed doors at the moment, which we'll be unveiling very soon. We do have a white paper out, though,
B
that explains exactly what we've been working on and what's coming along soon. You can find that on the Slim.AI website, and you'll also find links there to our YouTube channel, where we did a lengthy interview with four of their dev team about that whole process, because they were moving from monolithic VMs to a microservice architecture at the same time as moving to this GitOps workflow, all powered by the stuff that we're building. That was a fascinating conversation.
A
That's awesome, that's incredible! Yeah, I'm excited to see all the efforts made on those fronts; it's going to be a lot of fun. Awesome! Well, I don't see any more questions on that front, but I definitely want to thank you all for watching today. Gentlemen, if you have any parting words or words of wisdom to impart on the audience, I would love to turn it over to you before I close things out here.
C
I guess I'll just say we're a pretty friendly bunch; we like talking about containers and stuff like that. So if you come to slim.ai, you'll find links at the bottom of the page to our Discord channels, our Twitch stream, and all that. So please, just come chat with us, hang out, let us know what you think, and let us know if we can make improvements.
B
Yep, all of those things. And tomorrow we'll be on our Twitch channel, twitch.tv/slimdevops, at 3 p.m. Eastern time, 8 p.m. GMT. We're going to run through a version of what we've done today, and we're going to expand a bit more on that HTTP probe stuff. We had somebody in our community who ran into an issue where their container wasn't being fully stimulated, so we're going to circle back and try to find a more comprehensive set of probes to actually automate that stimulation of their container.
B
So if you want a recap on this, or to dig into things in a bit more detail, or if you have questions about what we've done today that we didn't get time to cover, then come over and see us tomorrow. And at the Slim.AI website you can find a link to our Discord; you can always message us in there and ask your questions, and we'll follow up with you.
A
Thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to hear from Peter and Martin around building, analyzing, optimizing, and securing containerized apps. We really liked the interaction and questions from the audience; it was a really lively bunch today. So thank you all for making it out, and for getting through all the packets and everything like that; it can get a little congested from time to time.
A
We bring you the latest cloud native code every Wednesday at 11 a.m. Eastern, but next week we're going to be off due to the American Thanksgiving holiday. We're going to kick off again on December 1st with Jason Morgan talking about Service Mesh 101, an introduction with Linkerd. Thank you for joining us today, and we will see you soon. Thanks, everybody!