From YouTube: TGI Kubernetes 172: Profiling in Kubernetes
Description
Join Bryan as we take a look at Observability as a whole and then dive into the fourth and under-explored pillar, profiling. We'll take a look at some of the basics and see what the community has to offer.
If you think about that number — oh gosh, actually, let me mute myself in the monitor... let's see here... there we go. Okay, much better. All right, that reverb will go away. So, like I was saying: episode 172.

All right, just trying to work out these technical difficulties here real quick — hold on one second.

All right, we're back. This will be a lesson to everyone out there: I had YouTube up in multiple... all right, I'll get to it myself in a second.

Okay, third time's a charm. I apologize to everyone. This is what happens whenever you use Firefox, Safari, and Chrome: I actually had the YouTube page for this up in multiple windows across multiple browsers, and I just could not figure out where the reverb came from. So thank you for joining us today. This is episode number 172 of TGIK, which has been going on for years, and I've been around for most of it — this is not my first or second rodeo here. I'm Bryan Liles.
So before we get started, let me see who's actually on today. Hello, Martin from the Netherlands, and Christoph from Germany, and Ymo — I'm BCL, nice to meet you — and then Harsh, and Ian from Wales, and Juka from Finland, and Ryan Perry from the US.

Thank you all for joining me here today — and Dima, hello. What we're going to talk about today is profiling apps in Kubernetes, with three exclamation points at the end; that's about how excited I am about this whole space. The goal, after we get through the news, is to actually sit down and talk about profiling — instead of just saying, "oh, here's a product, and here's a project, go look at that."

You know, let's do something a little bit different today: let's take it from first principles and try to see if we can work up, in a reasonable amount of time, to something that is usable. Without further ado, let's see who else just joined. We have Waleed — hello, a Libyan working in Saudi Arabia; it's late for you, thank you for showing up — and Moz, or Moe's, I don't actually know which one it is — hello. I'm glad you all could join me today.

So the first thing we'll do is kick off with some of the news. Let me actually share my screen here.

We have lots to talk about today. It's interesting — this is a very exciting week, to tell you the truth, at least in the Kubernetes space. There's lots of good news out there, and I want to share some of it with you.
The first item on our list today is that Contour 1.19.1 has shipped. I wasn't going to go to its page, but now that I think about it, I will. It is a minor release, and really it's about allowing the retry policy and the number of retries to be disabled.

The next item on our list is that a prototype is underway for moving kubectl — kube-control, or kube-cuddle, or kube-ecto — out of the tree. This is exciting work. Think about it: back when we had github.com/kubernetes/kubernetes, everything was there, and then things were pulled out into github.com/kubernetes/kubectl. It was nice, but that was only a mirror.

It's nice to see that people are still thinking about how to make kubectl work on its own, and I just think this is a very exciting project.

Please have a look at what — I'm going to say it's Pawna — is actually doing here, and send them kudos; or, if you don't like it, let them know why you don't like it.

Now, the next item here is GitHub Universe, which happened this week. Unfortunately, due to the way the days fell — and because I'm just always too busy to sit around and watch a conference online — I missed it live, but I actually did put it on my calendar to check out the replays when they come up, because it seemed like some interesting things came out of there. I did want to highlight Emily Freeman, who did one of the videos they had. What I wanted to share here has literally nothing to do with Kubernetes, but Emily's video on computing through the ages is a hundred percent worth the time it takes to watch.
Harbor 2.4 reached its general availability release this week; it focuses on distributed tracing for enhanced troubleshooting and identifying performance bottlenecks. There was also a blog post by — actually, I should go figure out what this person's name is, so I'm going to click on this link.

There's a blog post about kube-proxy and the Windows userspace mode, and this is something pretty interesting: it's nice to see that the work to make Windows a truly valid deployment target for Kubernetes is still going on. I've seen some of these things working out, and people are actually putting good work into it.

So it's nice to see that kube-proxy is going to support — or is supporting — the Windows userspace mode.

This is definitely on my list of things to watch. Even when Katie was at Condé Nast, I was paying lots of attention to the things she says, and now that she's at the CNCF, I'm 100% paying even more attention to what she says. And here's something that just deserves more notice: the code freeze for Kubernetes 1.23 is in two weeks and about three days, on November 16th.

Please have a look at the link to understand what's actually coming in this release. And because we all can't use the latest and greatest, older versions of Kubernetes have been updated too: 1.22, 1.21, 1.20, and 1.19.

The change logs are linked here for your later perusal. An important bit here is that the Kubernetes elections are still happening. I'm not logged in with GitHub on this computer, so I cannot show you this, but one of the most important parts of our community is the people who actually make it go, and there are definitely some strong candidates out there. We should be supporting the people who are out there making our community better than it is — or better than we ever thought it could be.

So if you actually have the ability to vote, you should be voting for those people. And then finally: some alumni from my company, VMware, went off and started a company called Chainguard with some other people from Google, and they're getting a lot of press lately. Chainguard — which is, actually, a lot of words here — a zero-trust supply chain security company, has just kicked off.

These are definitely some good people that I've worked with personally, whom I would definitely endorse, so please read about what they have going on. This idea of zero-trust supply chain security is actually pretty interesting, because what we're finding in this day and age is that it's not just your software that needs to be secure; what really needs to be secure is the methods you use to make your software.

I think other verticals, like manufacturing, figured this out a long time ago, but now in computing we're realizing that supply chain security is actually paramount: it doesn't matter how safe your software is if I can inject an exploit before you actually ship it.

So please take a listen — it's only a 14-minute podcast — and I will say I definitely endorse this good group of people, and I'm very anxious to see what they're going to come up with.
So now this is when we get into the interesting bit, and I actually tried to figure out how I was going to do this TGIK. I could come on here with some prebaked software, but when it comes down to observability, think about how it is presented today: we have this idea of the pillars of observability. Observability is the roof, and we have pillars that are holding it up. If you think about what pillars we have today: we have this idea of metrics, and metrics are things that your application can tell the world about itself. And what is that?

Well: how much memory are you using? How many requests are we seeing? There are myriad things we could actually put there, and we can expose them as metrics.

The second pillar we'll call logging, and logging, from my point of view, is the human-readable output from your software that can be used to diagnose how your software is performing at any given time. It also doesn't have to be human-readable — there's also this idea of semantic logging, where we still have human-readable data but with machine-readable metadata around it.

And then you have this idea of tracing, or distributed tracing. Now we can actually go through an application's process, and we can see the parts of the process: when a web request came in, we can see that it came in at this time, it did these 15 things, they took this much time, and then it went out at this time. And with distributed tracing you can say: well, now I have all these services in the system, composed together to make something greater than the sum of their parts, and I can see requests coming from a user and follow them through some front-end application — even from the browser — and through some kind of authentication service.

But I'm not going to talk about any of those today. I want to talk about profiling. Why profiling? Well, if you think about metrics, traces, and logs, they get you part of the way, but here's a question: how does your software actually work? What if your software is slow in production? If you didn't put in the tracing at the proper point, how are you going to know exactly why it is slow? What is it doing? What resources is it commanding?

What is it actually trying to call, and which parts of what it's trying to call are slow? And we can go even further, because we can do CPU profiling, and we can do memory profiling too. Think about it: on my desktop I've run this thing a hundred times and it's fine, but when I run it in production at 3am on, you know, the third Thursday,

this thing blows up and uses all this memory. How do I capture that state? How do I actually see what's in memory at that particular time? This is what profiling is going to answer. So before I get into this — because, like I said, I'm starting at the beginning on this one — I wanted to go back, take a look at my other screen, and see who's actually on the call today.
A
So
let
me
scroll
up
just
a
little
bit
so
hello,
carlos
and
hello,
faye
and
dmitry
hello
from
hello
to
you
in
oakland,
california
and
the
podcast
all
right.
I
hope.
Maybe
that
is
pop
himself,
hello
from
new
york
city,
carlos
great
friend
of
mine,
good,
to
see
you
on
today,
carlos
and
just
to
let
you
all
know
this
is
just
the
this
is
the
warning.
A
I
live
not
in
the
country,
but
I
live
close
enough
to
the
country
where
we
have
old
old
facilities,
old
power,
old
water
and
when
it
rains
a
lot,
things
get
a
little
squirrely
here.
I
have
ups
on
everything
and
I
think
I'm
okay,
but
my
power
has
been
out
two
or
three
times
today,
so
I'm
hoping
that
whatever
happens
over
the
next
hours
so
that
we're
able
to
keep
it
going.
A
So
if,
if
I
just
fall
the
face
of
the
earth,
it's
not
anything
serious
and
power
just
went
out
so
hello.
Let's
see
we
got
dimitri
here
again
all
right,
so
I'm
not
going
to
look
at
this
screen
anymore.
I'm
going
to
look
at
this
first,
this
other
screen
and,
like
I
said,
I'm
going
to
start
from
the
beginning
about
thinking
about
profiling
and
kubernetes,
but
to
get
started
with
that.
A
You
know
we
gotta
kubernetes
is
just
one
of
the
vehicles
to
get
us
where
we
need
to
be
it's
that
great
set
of
apis
that
allow
us
to
run
applications
anywhere.
We
want
to
run
them
with
same
networking,
storage
and,
let's
say
I'm
not
worrying
about
the
container.
The
container
format
kubernetes
is
not
the
end-all,
it's
not
the
be-all,
it's
just
actually
a
great
set
of
tools
to
help
us
get
to
where
we
want
to
be
so
before
I
actually
talk
about
the
kubernetes
bits.
A
I
want
to
talk
about
tracing
just
in
general,
and
this
will
be
fun
because
what
I
decided
to
do
like
I
said
this
is
gonna
be
live.
Is
we're
gonna
start
from
a
terminal
today
and
we're
gonna
go
into
an
editor,
but
we're
gonna
start
from
a
terminal
and,
like
I
said,
I'm
just
going
to
make
a
directory
here
and
what
is
this
one?
Pgi
k172?
A
If
you
see
my
type
of
head,
I
actually
did
type
this
earlier
and
then
deleted
it,
because
I
said,
let's
start
from
the
beginning,
and
and
what
I'm
going
to
do
is
we're
going
to
spend.
You
know
probably
most
of
our
time
in
vs
code,
and
I
just
want
to
say,
be
kind
to
me
when
it
comes
to
vs
code.
I
don't
use
it,
I
like
it,
but
I
don't
use
it,
and
so
I
seem
like
I'm
stumbling
around
a
little
bit.
A
It's
not
because
I'm
drunk
or
anything
like
that
or
because
I'm
just
it's
more
because
I
just
don't
understand
vs
code,
so
we'll
get
started.
Do
I
unders?
Do
I
trust
the
authors
of
files
in
this
folder?
I
believe
I
will
trust
the
authors,
because
that's
just
me
so
to
get
us
started
today.
I
was
gonna
dive
in
deep
and
type
out.
This
huge
go
app,
but
I
realized
that
go
is
a
great
language.
It's
one
of
my
favorite
languages.
A
It's
not
easy
to
get
started
and
show
people
ideas
and
a
language
that
is
good
for
that.
Swift
is
good,
but
I'm
not
going
to
type
out
anything
in
swift
today,
because
I
don't
want
to
launch
xcode,
but
I
want
to
type
out
we're
going
to
do
a
javascript
app
and
I'm
going
to
use
this
javascript
app
to
actually
set
the
set.
A
The
stage
for
this
conversation
around
profiling
and
what
this
javascript
app
is
going
to
do
is
go
and
it's
going
to
do
it
poorly
and
please
ding
me
whenever
you
see
what's
going
on
with
the
with
with
this
javascript
app,
is
that
I
want
to
be
able
to
it's
going
to
do
two
things:
it's
going
to
be
able
to
create
a
user,
and
then
we're
going
to
be
able
to
verify
that
user
and
the
caveat
the
caveat
is
this:
the
caveat
is
because
I
don't
want
to
do
a
lot
of
typing.
A
I'm
going
to
use,
gets
for
everything,
and
I
know
gits
are
are
supposed
to
be
item.
Potent
so
you're
going
to
see
me
pass
data,
but
get
that
you
should
be
doing
in
a
post,
but
you
know
what
this
is
not
about
javascript
today.
This
is
about
our
good
javascript.
This
is
just
about
I'm
showing
this
idea
of
profiling
and
and
we'll
use
that
as
our
first
tool
to
get
us
understanding
what
actually
what
we
mean
by
profiling.
A
This
is
a
brand
new
2020.
What
yours
is
2021
macbook
pro
14-inch,
it's
the
max.
It
has
64
gigs
of
storage
and
it
had
of
memory
and
two
terabytes
of
storage.
This
is
the
fastest
computer.
I
own.
If
you
have
an
opportunity
to
figure
out
how
to
finagle
one
of
these
go
get
it.
I
will
tell
you
that
I'm
actually,
the
computer
in
front
of
me
is
a
28
core
mac
pro
I'm
not
kidding
28
cores,
I
have
20.
I
have
320
gigabytes
of
memory.
A
In
this
thing,
this
computer
compiles
my
swift
app
that
I'm
not
talking
about
yet
faster
than
this
mac
pro
does,
and
it's
just
ridiculous
what
technology
does
so
if
you
get
an
opportunity
to
get
one
of
these
things,
get
it.
Oh
my
gosh,
it's
amazing
all
right
with
that
being
said.
Let's
type
some
code.
A
So
we're
in
we're
actually
in
vs
code,
and
it's
nothing
in
here.
It's
all
just
a
a
project
that
has
nothing
in
there
and
what
I'll
do
is
I'll.
Let
me
make
a
let's
make
a
directory
for
our
node
app
and
node
is
notice.
Particular
I
wish
I
could
just
like
just
start
typing
things,
but
we're
actually
going
to
create
a
a
package.json
file,
so
we
can
download.
You
know
one
dependency
so
before
we
need
to
do
that.
We
can
do
that.
A
A
So
so
the
the
node
package
manager
knows
how
to
be
able
to
do
what
it
needs
to
do
and
I'll
just
get
started
real,
quick
here
and
just
say
that
hey
we're
going
to
this
is
going
to
be
a
web
app,
and
I
don't
want
to
think
about
web
apps.
So
what
I'll
do
we'll?
Just
put
express
and
express
is
just
a
easy
to
use
web
server
and
notice
that
I
installed
express
and
it
put
it
downloaded
50
packages.
A
My
lord,
that's
just
crazy,
but
I'm
gonna
use
express
to
be
able
to
just
have
a
web
server
here
and
let's
actually
look
at
this.
How
big
is
this
for
my
web
server?
I
downloaded
2.4
megabytes
of
text.
That's
that's
great,
but
that's
not
what
we're
here
to
talk
about
and
what
we're
going
to
do
is
we're
going
to
put
this
file
we're
going
to
put
this
all
in
one
file.
A
A
A
So
we'll
start
off
and
we'll
just
require
express.
This
is
how
you
do
it
old
school.
A
lot
of
people
are
using
the
new
school
methods
where
you
can
actually
do
things
like
where
you
can
actually
do
the
imports,
but
we're
going
to
do
it
old
school
with
the
with
the
requires
and
we're
going
to
have
us.
We're going to create an app from our Express — and this is one of the things I like about JavaScript, and why I'm using it: you can actually be very productive with not a lot of code. What I've done here is create this const called app, which is basically an instance of Express, and I'm defining a port

so we could change it with an environment variable later. Then I'm going to start my web server: app.listen, and we can listen on that port. You don't have to do it this way — I'm just going to listen on all interfaces, in case I want to run this in Kubernetes later — and we'll just console.log that the server is up.
Maz is asking: what's the plan — are you planning on profiling a JavaScript app or a Golang app? Both; be patient, it's coming. I've got to work up to it. So what we have here is just a web server.

If I go `node index.js`, it's going to say the server is up, and if I were to — we will need another window here — curl http://localhost:8000, it's going to say "cannot get," because we haven't described any of the endpoints that we're going to need. The first endpoint we're going to do is an endpoint for creating a user, and, like I said before, we're going to use GETs when we should not be using GETs, so don't call me out on that.

First, we're going to do a GET to /user/new, and it'll create a user, where the username is query.username.

And what we're going to do when we're creating that user — we don't have any storage, and I don't want it to be persistent, so we'll just say it's all going to be in memory; we're just going to store it in a variable. But before we do that, we should probably check our input. So: if there's no username, or no password, or the username already exists in users —

which is where we're going to store them — we're just going to return, and we're going to say that the status is 400. Basically, if you didn't get any of those right, that's your problem. Otherwise, we're going to put them in an object — and this is why I like JavaScript, because I can do some of these things so quickly. So what we're going to do now is:
We need to create a salt, because we don't want to store the password in memory. What we really want to do is store the password's salt and its hash. So the first thing we're going to do is create a random salt, and we're going to convert it to base64.

All right, let me see here — we've got it to base64 — and then we're going to make a hash, and we're going to use crypto for that. Or, we actually need to import that first: const crypto = require('crypto'). There we go; now we can use it. We're going to call crypto.pbkdf2Sync.

So basically, we're creating this hash with PBKDF2, and we're going to call it synchronously. We pass in the password and the salt, we're going to do 10,000 iterations, the key length is going to be 512, and we're going to use SHA-512 —

we're not going to use SHA-256 anymore, because someone was able to do a collision with that and it's not safe. And then we're going to take this username and put the salt and the hash in there as an object. This is what I like about JavaScript: that is some good-looking code right there. And we can send status 200.
So that's the first bit. Then we're going to create another endpoint, called session. We'll go /session, and we'll take the request and res so we can use both of those.

Let me clear some of these windows off my screen. In the session handler, we'll do kind of the same thing we did before: we'll get the username — request.query.username or empty — and the password — request.query.password or empty. Then we'll do the same checks we did before: if there's no username, or no password, or the user does not exist in our little in-memory

object here, we'll just return a send status of 400. The reason we do that — rather than a 404 for "the user doesn't exist" — is that we don't want people to be able to explore which users actually do exist. And then we'll do kind of the same process we did before, but now we've got to check that the password, once it is hashed, equals the hash that we have in memory.
So real quick: we can get the salt and the stored hash out of users[username], and then we can compute const encryptedHash = crypto.pbkdf2Sync with the password, the salt, ten thousand iterations, and the same key length and digest.

There are a couple of ways to do the comparison — JavaScript is really, really expressive — so I can do const isEqual = crypto.timingSafeEqual with the original hash and my computed hash, and then I can just return res.sendStatus with a ternary: 200 if it's equal, 401 if not. And that's the first version of this app.
So how do we run this app? Like I said before, if we go back to the original screen, we can go `node index.js`, and now we can test it out. This is how: we'll just do curl http://localhost:8000/user/new with username=brian and password=tgik, and what this will do is create a user.

Oh — there we go; I had a bang where I did not deserve a bang. So we'll go ahead and create this, and it says OK, because it returned a 200. If I try to create it again, I get a bad request. Now what we want to do is actually validate our session, so I thought we could just curl http://localhost:8000/session — and I actually typed this in earlier.
So if I type this in, it says it's OK. If the password is wrong, I get unauthorized. If the password is right but it doesn't know the username, we get a bad request. Now what we want to do is test this, because it needs to perform in production. So what we can do — and I've already typed this out here — is use Apache Bench. Ooh.

Am I actually running this? Let me see something here: username=brian, password=tgik. Let me just make sure this is running — restart this real quick and do the curl, so create /user/new, do this... and we'll need to figure out why this is broken... /session.

Yeah, I am not sure what is going on here. Am I on the wrong port? All right, well, we will move on. Actually, this is what I will do: I will get the script that I wrote before. I'll go back to my index file, paste all that in, and comment this one out — and this is the best part about doing these things live.

I wrote notes just in case I got myself into a pickle, because I want to be prepared — but you never know what's going to happen whenever you code live. So now we can actually get this: we're running the server here, and now we run the curl. Let me put this up here. So we create — I'm actually using new-user in this one.
New-user — so we created a new user, and then if we go up and I do this... actually, I think my machine is on the wrong host. That's what the problem is.

I don't actually know what I did here, but the real problem is that this is going to be too slow, and if you're any bit of a JavaScript developer, you'll be able to see why: I'm making a synchronous call in JavaScript. The problem with JavaScript is that there aren't any threads. JavaScript is made to run in a browser, so in the programming model there are no threads — and what ends up happening, because there are no threads, is that a blocking call holds up everything else.

And then — I'm not quite sure: one is auth and one is session, and the password is different in the bench call. You know what — there we go: "dgik." I'm glad you all caught that. So let's see here; let me just try this one more time before I go into the explanation.

Oh, I know what the problem is — good call. So let me run this again, and it says it's listening for requests on localhost. So if I do —
I wonder what's going on with my computer — this actually just worked. Did I put an extra single quote in here? Let's see.

All right... and then, okay, I actually do have it: it is listening. I think I'm actually having a different issue here. Yeah, this is what happens when you do things live. Let me look at Apache Bench real quick, just to make sure I'm actually calling it correctly.

Yeah, I'm seeing this — and I'm getting lots of feedback here. The problem is that I'm calling localhost:8000. If I go look at netstat and I look for 8000, I'm actually listening on TCP on 8000, so I'm just curious why the curl works — which it does — but Apache Bench does not. And you know what, that's fine.

I was actually set up to be able to do this. So the problem is — actually, let me just try this one more time: username=brian, password=tgik...

Yeah, let me try something else. Let me get the IP address of my local machine, because I know it's listening there. Let me do something real quick in another window — ssh, we'll just ssh to another machine. Oh no, not that one.

So if I go back here and do Apache Bench against my IP address, we'll just see if this is the problem... there. Weird. So what's happening here is that this call is actually going out now, and you're asking yourself: well, you saw the code — why is it taking all these seconds?
Why is it taking all these seconds to do this? If you look at it, it took 13 seconds to make 250 calls, and if we look at the timing, the 99th percentile is taking 1.3 seconds to validate a user and a password. This is a simple example, but think of a more complex one: you have an algorithm running in production, and based on the data and a permutation that you've never seen before,

how are you supposed to be able to solve that? And here's the greatest bit: metrics might be able to show you that, but probably not in the way that you want. If you were tracing — and you actually had a trace span that included where you needed to be — that might work. Logs? Well, if you're not logging it, it definitely won't work. So profiling is actually the answer for being able to solve this problem.
So let me show you how we do this in JavaScript, and then I'll move on to more complex languages to show you how we do it there, and then we'll explore two ways of doing this in a cluster. The first thing I want to do is inside of my package.json file.

I was running `node index.js` to run it before. Node has a built-in profiler, so we'll use Node's profiler to spit out some data for us. Instead of running it that way, we'll do `npm run profile` — it's pretty much the same thing — and we'll go back to my screen here, go to new-user, create a user, and do the Apache Bench again. What we'll notice is that it's still going to take
a few seconds for it to work out properly. So let's let it run and see what happens. 100 requests are done; we should be almost at 200, and the rest should only take half a beat — a little bit slower this time. So now the 99th percentile is a second and a half. But instead of guessing, look at what we now have in our directory.

In our directory, what we have is this file here called isolate-something-something.log. If we actually look at this file — the Node and V8 people were nice enough to basically write down all the profiling data in it — and instead of us having to dive through it ourselves,

what we can do is run node with its tick-processing flag against this log and send the output to process.txt. What happens is that it replays this log file and gives us a summary, and because I put it in an actual file, we can see what was happening inside of our application.
A
So
if
we
go
from
the
top,
it's
saying
something
about
shared
libraries.
Maybe
you
had
shared
libraries
that
you
were
loaded
that
caused
things
to
be
slow,
and
if
we
look
at
this,
these
shared
libraries,
even
loaded
together,
are
less
than
one
percent
or
less
than
less
yeah
less
than
one
percent
of
the
actual
runtime.
So
let
me
look
in
the
javascript.
Javascript
is
notoriously
slow
and
we
actually
do
have
two
instances
of
those
in
there
and
what
we
notice
that
it's
actually
a
trivial
amount
of
time.
A
It's
actually
taking
zero
percent.
But
let's
actually
look
at
this
c
plus,
but
well.
Actually,
let's
skip
looking
at
the
c
plus
bit
because
there's
a
cheater
mode
placed
at
the
bottom
at
the
bottom.
It
actually
tells
you
what
took
so
long
and
if
you're
on
a
mac
the
way
that
the
way
that
these
things
work
is
that
they
actually
are
moving
memory
around
in
an
interesting
way.
A
So if you use this on Linux you actually won't see these first two lines, but if you do it on a Mac, what you can see is that 97.9% of the call time was that pbkdf2 method. And like I was telling you before, the reason is that JavaScript wants you to make asynchronous calls; it likes callbacks. If you have things that block, guess what: it blocks everything. And it actually tells you where it was, so let's look at the app's index file.
A
So if I actually go and look at this line, it's going to bring me to the top of this method here, and we're going to notice that basically it's line 38 that is the cause of the problem. And you know, how can we fix this? Well, I could walk you through it and say: what we can do is use the asynchronous method instead of pbkdf2Sync, and that's the solution.
A
So let me show you how to do that, and instead of just typing it all in, since I've copied and pasted it, I'll just do it like this. I have a new version right here in the file, and this is the asynchronous version: it makes the same call, it spits the hash back out, and then it calls this function.
A
So that means that whenever this line is called, it'll release control so something else can run. Hopefully that's the case, and that was the reason why we had this problem.
A
So let's use this, right: we use this file and we'll put the output into processed.txt again. The first thing we're going to notice is that this thing ran way quicker; that's because there was nothing in there yet. Let me actually do my new-user magic first, and then we'll do auth.
A
So the first thing we're going to notice is that the 99th percentile for this run is now 13 milliseconds, rather than, you know, a second and a half or even longer, or 1.3 seconds.
A
So just that one change there on line 58 actually made it way faster, and the only way we were able to see that easily was with profiling. I'm sure that if you're a seasoned developer you would have seen instantly that we were making a synchronous call. But what about the calls that are not obvious? That's why profiling is so important, and that's why we put so much effort into doing profiling.
A
So that's phase one of our project. But you know something, I didn't come to just talk about JavaScript today; I also came to talk about Go and Kubernetes. So the next thing we need to do here is cancel all this. There'll be no more JavaScript in this demonstration, and we'll make a Go app.
A
Spending so much time in this cloud native space and so much time around Kubernetes, Go is actually a lot easier to reason with these days. So what we'll do is create one file, and we'll use this file for pretty much the rest of the time. We're going to create the simplest web server that I can think of that we can actually create in Go. There are a couple of things I would like to do first, because I'm not running in GOPATH.
A
What I need to do is create this as a module, but we won't actually import anything. The next thing we want to do inside of our file is, what is it, package main, because we're only going to have one package in here. And the next thing we're going to do is have func main.
A
I have that as well, and now that we got that far, what I would like to do is just copy and paste the rest of it in, and then I'll walk through it and we'll see what is actually going on here.
A
So if I do go run main.go, you don't see anything, but because I know we're running on port 10000, if I curl http on port 10000 I should get some kind of response: welcome to the home page. And that's exactly it. But let me walk through this source real quick before I move on and show you what the next section is going to be. So we have this application that we want to run in Kubernetes.
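A minimal sketch of what that main.go plausibly looks like, assuming only what was shown on screen: one handler, port 10000, and the blank pprof import. Details of the real demo file may differ:

```go
// Simplest-possible web server for the demo: one route on port 10000.
// The blank net/http/pprof import registers /debug/pprof/* handlers
// on the default mux as a side effect -- the "old school" way of
// exposing profiles from a Go process.
package main

import (
	"fmt"
	"log"
	"net/http"
	_ "net/http/pprof" // side-effect import: pprof endpoints
)

// greeting is the body served from the home page.
func greeting() string {
	return "Welcome to the HomePage!"
}

func homePage(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, greeting())
}

func main() {
	http.HandleFunc("/", homePage)
	log.Fatal(http.ListenAndServe(":10000", nil))
}
```

With this running, `curl http://localhost:10000` returns the greeting.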
A
So how would I actually be able to see what's going on in this application when it's running inside of a cluster? First we'll need to get it in a cluster, and then we'll need to figure out how to get information out of it. And how this runs here is:
A
This is the simplest web server, so I put the old-school pprof stuff in here from the Go standard library. We won't be using that here; I want to be able to show you that there's got to be a better way to manage things without having to expose these types of endpoints, because guess what, in a lot of cases the operating system already knows.
A
Let me make it a little bit bigger, and we'll talk about something that I think a lot of people are hearing as a buzzword but aren't really understanding: eBPF.
A
Let's talk about this for a little bit. It's hard to say, eBPF. Many, many years ago, the best operating system ever created was created: it was called FreeBSD. In the late 90s I could have sworn I was using both Linux and FreeBSD, and I'm like, how would anyone want to use Linux? FreeBSD is so much better: the way they structured the kernel, the way that you could actually interact with user space.
A
It was amazing. But one thing that came out of BSD land was this thing called the Berkeley Packet Filter. The folks at Berkeley created this amazing thing, and what the Berkeley Packet Filter allowed us to do, at a kernel level, was write software that actually ran inside of the kernel, and that software could do all sorts of neat things. And actually on this page there's a little bit of a diagram.
A
So, like everything good, it was ported from BSD somewhere else, and it was ported to Linux. And now we think about, well, what can we do with BPF right now? People are using it for networking: we can actually get the speed of not having to be in user space, and at that lower level we can create the networking semantics that we need, because we're at the kernel level.
A
What we can do is have applications that run that we don't have to understand; we don't have to tag them with any kind of profiling code. Because they run, we can find software that actually understands what a Go program looks like, so it can actually go through it.
A
We can actually do profiling for any type of application, in other languages as well, and this is the neat new place for us. Imagine being able to deploy an application and then being able to run a system afterwards to figure out a bug that popped up that you had no idea about. You don't have to take the application apart, and in the future we won't have to do things like making applications dump core so that we can inspect them.
A
We can inspect them in real time, and that's really where we want to end up. That's actually the only reason I wanted to do this session, because I wanted to say those words: we are almost to the place where we can have running applications that we can inspect in real time to understand exactly what they're doing. And here's the best part: we don't have to give up much.
A
We'll give up a little performance, but we don't want to give up a lot of performance to do it. Because if we think about how profiling works in a lot of applications, either you basically annotate your application with profiling code, and then, if it's a CPU profiler, it actually will stop and look at things, or if it's a memory profiler it'll actually snapshot the memory every once in a while to determine what's in there.
A
If we can have access to do this at the kernel, and then the flexibility to change it without rebooting the machine, you know, we've built a powerful, powerful set of constructs. So for the next 20 minutes or so, what I would like to do is look at two projects that I see as very interesting in this space, and the first one is pyroscope.io.
A
It's an open source continuous profiling platform. The neat part is that they actually are working on support for Go, Python, Java, Ruby, PHP and .NET. That's a lot of software. But here's an interesting thing: they're also experimenting with eBPF, and that's what makes it exciting and interesting.
A
It's this concept right here: if I start a Go app and I want to profile it right now, I have to import pprof. And if I want to use, let's say, Pyroscope and their Go support, what I'll need to do is actually annotate all my code; I'll annotate every instance of my code with their source.
A
So that means I have to change every single piece of software, and they're realizing, with eBPF, that I won't have to do that. The second piece of software that we're going to use is parca.dev, and Parca is basically a similar premise, in that they want to be able to look at programs running in a Kubernetes cluster, or anywhere else really, and show you what's going on, not just now but over time.
A
So let's go take a quick look at both of these applications and see if we can get them working, and this will be the fun part: how do we get this working? I'm in an interesting spot right now. I love computers; I showed you my new computer. But it's like:
A
Where am I going to run Kubernetes? One of the reasons I have this big computer that I'm sitting at is so I can actually have, you know, decent-sized clusters or anything else I want to do, or virtual machines, at my fingertips at any time. I don't have to go out to a cloud; I can just have them on my desktop.
A
There we go, that's what I want to do: a minikube start for a tgik-172 profile with --driver=parallels. And you're saying, Brian, you work at VMware, why are you not using VMware Fusion? In this particular case I like how Parallels does networking, so I use it over Fusion. I have them both installed on this machine, but in this case we're just going to use this. And someone's saying, well, why am I not using HyperKit? Well, the reason why is because I wanted real Linux.
A
I wanted real Linux in a virtual machine in this case. And this is interesting because right now I can launch on Google, Amazon or Microsoft, and so GKE, AKS or EKS.
A
I have a 100-core Kubernetes cluster down on the floor below me, and actually I do use Tanzu Community on that one, but I chose to not boot there; I'm just going to use minikube here. But I also use kind; it just depends on where in the Kubernetes stack I choose to optimize. If I choose to optimize for just applications running on there, kind is nice, and it's great if I'm trying to do scale-out or if I want to do some kind of nested things.
A
That's what my, you know, huge cluster downstairs is for. But if I just want to experiment, and I think I might need some control of what's running underneath the Kubernetes, that's what minikube is for.
A
So you know what, I can't say which one is best. But my customers are running telco at cell phone towers, and they're running it in data centers, and they're running in the cloud; I choose to run in all those places too.
A
So I built a Kubernetes cluster, and actually, you know, Carlos Santana: my house is not very toasty. It's in the basement, it's actually only three machines, and it's actually not very loud. I put a lot of work into making sure that I don't have a lot of loud machines. And just a little shout-out to VMware: our home lab culture is ridiculous here.

These people that I work with have the hugest, craziest labs, and it's just so amazing what they can actually do with them. I'm nothing like that. So I built this cluster: eight CPUs, 20 gigs of memory, almost 20 gigs of disk, just enough for what we're going to do today, which is run that little Go app. So now it's all running. If I do k get nodes, because I have the alias there, let's see: it's up and running, it's 1.21.2. It's not up to date, but that's fine.
A
So the first thing we're going to want to do is actually deploy that app that we just talked about. So how do I do it these days? The first thing I need to do is build an image, so I need to convert this thing into a Go image. I'm actually going to use ko; let me pull up the site.
A
If you're not familiar with ko and you're building images that use Go, you're missing out; ko is amazing. It means you don't have to think: if you just write applications in the Go way, using ko makes this so simple. You're going to see this in a second whenever I actually build this image. So we'll make this that size, there we go.
A
So I'm going to build this image here real quick, and like I said before, you all have got to see it run. I can just do a ko publish on this directory. This is so amazing to me; I don't know why I love this so much. ko built it and pushed it in like three seconds, and now I have this image that I can go use, and that's pretty dope. Actually, I'm just looking at the chat here; someone said: can I run htop?
A
Yes, that is what my machine looks like in htop. I have 56... 26 or 28 real cores, and the rest are hyperthreads.
A
I'm using 65 gigs of real memory right now, and I guess all the rest of this is cache. If you could see my other window, it would be pretty crazy what I have running. I love running everything and never quitting anything.
A
This is how I stay sane. So yes, here you go, that's an htop for my machine. But keep in mind this machine is still slower than that MacBook Pro; it's crazy. So we have that image, and if I do a paste, there it is. So the next thing we want to do is build a manifest.
A
Let's see: k create deployment, I do this a lot, so we'll just call it go-app. We'll just use the command line to do this, and we'll spit it out to the screen to make sure it's right, and then send it to our app manifest file.
A
So that's the first piece we need to do; then you create the service the same way. This is why I love fish as a shell: you notice that my command line history is pretty ridiculous.
A
There we go. And then, because we're using all the editors today, we'll just use them now: no more VS Code. Get rid of a lot of the extra junk, split these two up, and we'll actually rename foo to go-app, there we go, and get rid of this, and this, and this. So now we have a manifest to deploy this application into my cluster, and we'll just deploy that real quick.
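After that cleanup, the manifest pair ends up shaped roughly like this. This is a sketch, not the file from the stream, and the image reference is a placeholder for whatever ko publish printed:

```yaml
# Rough shape of the cleaned-up manifest: a Deployment plus a Service
# fronting the Go app on port 10000. The image line is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
  labels:
    app: go-app
spec:
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
      - name: go-app
        image: <registry>/tgik-172-go@sha256:...  # printed by ko publish
        ports:
        - containerPort: 10000
---
apiVersion: v1
kind: Service
metadata:
  name: go-app
spec:
  selector:
    app: go-app
  ports:
  - port: 10000
    targetPort: 10000
```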
A
k apply -f on the app manifest, and then, I don't know, there's like four things we could do here. I could use something like Octant, or I could just do k get pods.
A
So I need to actually SSH into my minikube. What was the good old route... we need to go into systemd, and the neat part about minikube is, because it's Linux on the bottom and it does use systemd, if you need to change something, you can actually change it. What this does is, I'm just changing it because I'm having DNS problems right now at the house, so we'll just move that one.
A
So let's see if I can remember how to restart systemd-resolved; in a couple of seconds I'm actually going to google it in another window. I don't remember these things: restart systemd-resolved. There are certain things that I will never commit to memory, and this is one of them.
A
So now this thing is running in my cluster; I have an application running. So what I want to do in the next couple of minutes, let me go back to my Chrome browser here, is see if we can get either of these working for the eBPF, and we'll give each one 10 minutes, because this is like the litmus test: are we at the place where we can actually get these things running in 10 minutes? I don't know; this will be fun.
A
It looks like with the eBPF, oh interesting, there is basically a Pyroscope process that will basically exec my process, so you could do pyroscope connect. Actually, let's go back down here and see if there's anything else we can do. So you can run it, you can give it some variables here, and then you can run sudo -E pyroscope exec.
A
Well, the problem is that I built my image using ko, which means I likely have a distroless, non-root image running, so I'm not going to be able to do pyroscope exec. All right, let's see, how can we do this in the next 30 seconds?
A
What we could do is build another Dockerfile to build this, and then I could just make it run as root for my demo. That would be one way. Actually, it doesn't seem like a good way to do it; you wouldn't really want to. Now this puts us in an interesting place: if you run this, that means that you have to run it as root, and to be safe...
A
We tell people to not run as root, so I don't know if that's actually the best idea. Let me close this window so I can actually see. Yes, actually, Evan, I could switch the ko base image, yeah, I could do that, or I could do it as an ephemeral pod.
A
I could do it that way, or maybe I could just use Parca. Let's see what Parca says in this case. With Parca, what it says I'd have to do is basically run these three commands, and then I can go look at their tutorial. So let me go look at their tutorial and see if it needs root as well.
A
So now I'm looking in the chat here, and I want to comment on some of these things. Notice I said I didn't want to run as root, and then we said, well, we can switch the ko base image. When we switch the base image, that brings in some interesting bits, and I like to run distroless as much as possible.
A
But what that means is that I want to make sure that my developers and the people I'm working with run with that constraint, that idea that they can't run as root; our normal modus operandi cannot be root. So yeah, I think we could do that, but let's actually see if we can do things without changing the way that we operate. And then the ephemeral pod idea, and actually a privileged DaemonSet, could work too.
A
So after scanning this real quick, it looks like this seems like an easier install, so let's actually just try their install; we're in this cluster. Actually, we're going to use technology here: we're going to put this one over here and this one over here, so we can have both at the same time, and make this one slightly bigger.
A
The second thing we need to do is apply this manifest.
A
Once upon a time I wrote a tool that would allow you to download a manifest and inspect all the contents before you applied it, and I can't find it. This is what happens when you get older, so we're going to have to do this the old-fashioned way. What we'll do is go into manifests, use wget, and just look at it by hand.
A
So what we have in here: it has to create the namespace, because it already did that. Then what we get is a ConfigMap with some configuration for the application itself, and then we have a Deployment with some labels on there. It runs the parca binary with that configuration, which we're probably going to mount in... yep, I see it here. So we're basically just going to run that. Oh, hold on, hey Parca people, if you're on this call: you create a namespace up here... oh crap, I mean...
A
So let's look at this again. We have a namespace, we have a ConfigMap, we have a Deployment, and then we have a namespace again. Oh, I see what they're doing: they're maybe using Kustomize or something to build this on the way out, and they're building all the parts separately, which is why they have the namespace there twice. And then it has a PSP, which is interesting given that PSPs are going to go away.
A
So it'd be interesting to see how they're going to move with this after PSPs go away. Then we have a Role, and then we have a RoleBinding. Okay, what does this Role do? It just has API groups for policy and allows it to use the pod security policies. And then we have a Service, and it looks like it runs on port 7070.
A
So we'll take a look at that. And then we have a ServiceAccount, which is probably referenced in the RoleBinding, and it is. And let's see, looking at the things here: the labels on the namespace are the PSP replacement. Okay, got it, got it, got it.
A
Yes, all right, so now we've looked at this; let's install it. So it's k apply -f on that Kubernetes manifest YAML, and there's Kubernetes saying what's up. We have a little bit of time for that, so it's not a dire emergency, but always nice to think about.
A
We do a k get pods -n parca: something's happening, but we don't know what yet. All right, we'll just watch that for a second... oh, no need to, it's now running, and let's see what it says. So we did the Parca server piece. Oh, we need to do a port-forward.
A
Because port-forwards are not fun, I actually have set up a port-forward on another screen, so it's there. So now what I should be able to do is go to localhost:7070, and it's running, nice. So let's close this; we have too many windows open. Next up we have to set up the Parca agent, and what the agent does, from what I'm ascertaining here... well, actually, you know what, let's not guess what the agent does.
A
Okay, it runs on port 7071, so we'll probably go take a look at that in a little bit. And, oh, look at this, lots of local mounts. This is interesting, but very much needed; I can see why this would be needed. You have the bpffs and all these things there. Look, this is a low-level tool; it's actually going to have to mount a lot of things. Makes a lot of sense.
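For context, an eBPF-based agent typically asks for host mounts along these lines. This is an illustrative sketch of why those volumes show up, not a copy of Parca's actual DaemonSet spec:

```yaml
# Illustrative hostPath volumes an eBPF profiling agent tends to need;
# a real agent manifest will differ in detail.
volumes:
- name: bpffs
  hostPath:
    path: /sys/fs/bpf        # pinned eBPF maps and programs
- name: debugfs
  hostPath:
    path: /sys/kernel/debug  # kernel tracing interfaces
- name: modules
  hostPath:
    path: /lib/modules       # kernel modules, used for symbolization
```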
A
It's making sure that all those volumes that we saw are mounted. And then we have this pod security policy again, and then we have another Role and a RoleBinding, which uses this policy here, okay, got it. And then there's a ServiceAccount, very similar to the last file. So what we'll do now is apply this to my cluster.
A
Now that we have everything up and running, we're going to set up a port-forward, and because we have multiple pods that could be running, we're going to run this command here. I'm going to run it in another screen, but I'll actually say what it does for people who are paying attention: we're going to run this port-forward, but we're going to take the output.
A
All right, there we go. I've moved to headphones here, and you should be able to hear me again. I 100% apologize, but if you pull up a map and look at the East Coast of the United States right now, you'll see that there's a crazy storm going up the East Coast where people are flooding, and I'm sure it's not going to be great for some people.
A
So I consider myself extremely lucky to be able to do this when there's literally no power in my house and we're running off battery right now. But we're going to get through this. Like I was saying, what this port-forward does is basically just look for a Parca agent pod; it gets the first one and maps that in. So what I'm going to do is make sure I actually ran that command and that my port-forward is actually running.
A
The power just came back on, but I'm not going to trust that. So now we have port 7071, and if I go to localhost:7071, hey look, now we're getting the output from the agent, and it's just one of the agents; well, it's actually the only agent. So what I want to do is see if the Parca agent actually got my thing. So let me take a look at this.
A
If we click on this we can get CPU samples, pretty cool, and if I do a search, what's going to happen? Well, we're going to get something out of here.
A
So if we look at this, you notice that the top one was Parca itself, it looks like. And let's see what else we have on here: we have kube-vip, the API server, we have kube-proxy.
A
Oh, I will zoom, let me zoom a little bit, thanks. Let me do it like this, there we go. So we have all the things that are running in this cluster, and if I go towards the right, let's see... oh, I can see my Go app.
A
So, if I know these folks like I think I know these folks, let's see: I wonder if I can type something like namespace=default in here and see what happens. Oh gosh, I love good defaults.
A
So now I can see my Go app that we deployed; it's being profiled. But you notice that it says there are no samples, and the reason why is because we haven't done any work; we haven't worked that thing at all yet. So what we need to do here real quick is figure out how to send that thing some traffic. So what I'm going to do here is a k get service, to see what services we have running in this cluster.
A
So we're going to port-forward: k port-forward service/go-app, and we'll put that on port 10001. It says service go-app does not have a port 10001... oh, I typed it wrong.
A
It's 10000. All right, one second here, let me go back in here and look at this real quick. So if I look at my app...
A
k apply -f app; this will make it easier to type on my other screen.
A
App, port 10000, there we go. So if I go to this machine and do localhost:10000, I should get my welcome to the home page, and we'll do it like, you know, I'm just hitting reload, so we got that like 10 times. So now, if I do a search, and I'll give this thing about 30 seconds to catch up, my assumption here is that we should have caught some samples, and if we catch some samples then we can actually go look at our app.
A
So we'll give it a second here. Actually, while we're waiting for that to happen, let's go look at some of these other samples that we have. So I have this Parca agent, and right now, towards the right part, it's actually pretty high.
A
What can I see on this thing? Oh well, here we go. So I clicked on it at basically, you know, 9:27 UTC, and now what we get is a neat kind of icicle graph, and now what I can do is determine where all the CPU is being used in my application.
A
So if we go from left to right, let's look at this one right here: this is Go doing what Go does, so we can ignore some of the things that are kernel. But let's actually look at the Parca agent one, and we can actually see what parts of the app are taking up most of the usage. We can see that, you know, we did not put any of this in our app ourselves; we didn't actually code this.
A
While we wait, let me actually do an Apache Bench. I know I have one somewhere... there we go.
A
So what I'm going to do is send it, I don't know, this is Go, so let's send it a hundred thousand: we'll send it 100,000 requests at 20 concurrency, but we'll keep the keep-alives on. So now I actually have Apache Bench doing this, whatever it's doing. What I hope to see here... someone asks: can you list the CPU time consumed by each function? I don't know, let's actually figure that out.
A
Sorry about that, I hit reload on the wrong screen, my bad. Nope, not that one. All right, I'm back, just making sure.
A
I'm just making sure I'm back, audio okay, I'm back; I hit reload on the wrong screen. So what I hope to see in a little bit is that the graph actually goes up, because I'm actually serving requests.
A
So, oh, let me hit search. Oh, there we go, perfect. So this is all of our synthetic traffic that actually hit this application running in the cluster, and if we go and click on this one, we can now see what this application was doing. So if we go over here, we're looking at the runtime; we can move past these.
A
But what's more interesting is my package itself, my tgik-172-go, and you notice that http was actually a very big consumer of CPU, and then serve. And then, if you look down here, there's a lot of bufio ReadLine and ReadSlice; this is because I'm just sending back a whole bunch of text. So what we were able to see here, in my very trivial, like literally trivial, example, is that Parca was actually...
A
If I click on this, Parca is able to see what's going on in my application, and I don't have to do anything. And the neat part about this is that if I go back... actually, let's do this while we're on here: we'll quit this and we'll go SSH in.
A
What I'm looking for, because I know it would actually show up, is whether the profiling process was taking up a lot of the CPU. It would show up here, and notice that it's actually not showing up, because it's in the kernel and that information is already available. So it's pretty cool that this is able to profile without actually taking up lots of CPU.
A
So this is what I'm going to say: I would look at Pyroscope, but I don't know how long the power is going to last. So this is what we're going to do: I'm going to write the show notes, and I want to say that, you know, we as a community need to think about the fact that there are more than three pillars. Let me actually put my serious face on: there are more than three pillars.
A
When it comes to observability, you know, we figured out the shape of metrics, we figured out the shape of logs, and we figured out the shape of distributed tracing. All three of those have way more to go to become more accessible and more used by teams; it should not be a challenge to do any of those. But at the same time we need to think about other pillars of observability, and I will say that profiling is definitely going to be one of those.
A
It might not be the only one, but it's going to be one of those. So we're looking at the tools; the open source community has already hopped in there. Hey Brian, I would love to do Pyroscope, but Mother Nature says no today. But look at this open source right now: we have Pyroscope, we have Parca.
A
This was definitely an event, and really I just want to give a shout-out to all the maintainers for making this great software and making it open source, so that we can talk about it on Fridays. And I want to thank all of you, especially... oh, Pixie, that's right, someone's mentioning Pixie; Pixie was picked up by New Relic not too long ago, so there are definitely lots of people who are trying to fill in the space.
A
This is a good time, but I do want to say thank you, everyone, especially everybody in Europe and Asia who joined me tonight. It's late and it's Friday; you could have done anything else, but you chose to spend time here, and I really appreciate that. If there's anything that I missed or got wrong, or anything else, you know, send me a DM on Twitter. If you can figure out what my email is, send me an email, or find someone that has it.