# TGIK 151 - Cluster Diagnostics with Crashd

Description

Come hang out with Vladimir Vivien (@VladimirVivien) and learn about cluster diagnostics using the Crashd open source project. We will go over the project and show how you can use Crashd's Starlark-based language to automate cluster interaction when diagnosing problems.
Welcome to TGIK! Let me make sure that I am heard and seen, and not just talking to the void. I'm going to look for a thumbs up somewhere, so hopefully you can see me. My name is Vladimir Vivien. I'm an engineer at VMware, and this is my first time hosting TGIK, so hopefully it'll be special.

It'll be fun. Today we'll be talking about Crashd. I'm not going to get into the details right now, but if you're here, you either want to hear about Crashd or you want to see my beautiful face; either way, welcome. Let me share my screen to make sure we can get screen sharing going. All right, yep, we're going to share.
All right, okay, so hopefully you can still see. Forgive me: we're using StreamYard, and I'm a little unfamiliar with it, but I think I can get it to work. Actually, let me just share everything, to make sure that you are seeing whatever I'm seeing. Okay, we're going to do screen share. All right.
So welcome once again. My name is Vladimir. Hopefully you're seeing the episode notes, and again, we're going to talk about crash diagnostics — Crashd — today. As the host, I believe I'm supposed to do the week in review. Actually, I want to make sure...
Hang tight, folks; we're still only three minutes after the hour, so I'll get everything set up and then we'll keep going. But I want to make sure that I acknowledge folks as they come in. I thought StreamYard had a chat window, but I'm not seeing it here. Hang on; let's see.
All right, now I've got a quiet YouTube. Awesome, now I can see. From my YouTube window I see folks trickling in. Hello, Joe — hey man, excited to see you too. "Looking forward to this episode" from Waleed. "Hi TGIK, happy from Istanbul" from Sevi. Hi, Michael Klug. "Loud and clear," says Joe — okay, awesome. And Joe, I thought there was a way to see the comments in StreamYard, but I don't see it... oh, oh, I see it. All right, I got it now. Sorry folks — getting used to the UI. Okay, so I see my comments now.
I can see the comments in StreamYard, so I'm going to kill YouTube so we're not eating up resources. Hello, everyone! Okay, we're six after, so let's keep going. I see folks keep trickling in: hello from Helsinki, Jukka — hopefully I said your name right. Noel, glad it started. Robert, hi from Germany. Awesome — all these folks coming to check the show out. All right, cool. So again, my name is Vladimir.
I am from VMware, and today I will be your host for the TGIK show. As I said earlier, if you're here, you probably want to hear about Crashd, and we'll get to that. But before we do, let me go ahead and share my screen.
And yes, the problem with this is that when I leave the tab, I lose everything. Okay, so you should be seeing my show notes; if not, I will check. Please let me know if you see my show notes whenever I move away from the tab...
...while I wait for the comments to come in. All right, so let's do the week in review. Kubernetes 1.21 was released. A lot of work went into this release — just like any other release of Kubernetes, it's a lot of work, and we have a lot of dedicated folks helping out. So please check it out: if you have the link to the show notes, you should be able to click through and read about the release.
Two big deprecations were announced — and thank you to Laurie and Joe, who helped me with the show notes. PSPs (PodSecurityPolicies) are finally on the chopping block, and then topology keys. Again, if you want to learn more, take a look at the notes. Also, cri-tools was released alongside the 1.21 release.
For 1.22: if you're interested in joining the release team, it's a great way to be involved upstream and join in the fun. Sorry — it's a ton of fun. It's hard work, but it's super rewarding. If you're interested, the link to the application is in the show notes, and you can jump directly there. Also in the notes: a guide to Kubernetes networking. I had not taken a look at this one, so let's see what we've got — open a new tab.
This is from Kevin Sookocheff, and it's a guide to the Kubernetes networking model. I'm pretty sure this will come in handy for folks who are interested in learning more about how Kubernetes networking works. Anyway, if you're interested, check out the show notes and take a look. Let me jump back to StreamYard to make sure things are going okay — zoom in a bit. Okay, awesome!
Let's see... all right. You know what, I think I'm going to keep YouTube going, because that's the only way for me to know what's going on.
All right, let's keep going. "Cisco's intern system modeling approach" — I'm guessing... let's see what this is. This is probably a blog. Yes, it is a blog, by Dominique Tuarno. Oh, this is not a recent one, but I'm pretty sure if it's in the notes, it's probably useful. Again, it has to do with networking, network policies, etc., so go ahead and check that out. Next: a survey of chaos engineering tools for K8s. I think I remember seeing this; I believe it's also a blog. Let's take a quick look.
There you go — open source solutions for chaos engineering, which we know is becoming very popular as a way to make sure that you have a reliable cluster. This is a blog from Vasily Marner from Flant. It looks like a pretty lengthy write-up, so I'm sure there are some good, informative tidbits in there. Take a look at this blog about open source solutions for chaos engineering in Kubernetes.
All right, let's keep moving down the line. "Why I run Django on Kubernetes" — that's a good question: why do I want to run Django on Kubernetes? This one is from — who is this from? — Anthony Simon; I just read the name from the URL. It looks like a good write-up.
It's about why Kubernetes is the right spot to run Django. If you're not familiar with Django, I believe it's a web framework for Python. Anyway, if you want to learn why Anthony Simon is running Django on Kubernetes, please take a look at this write-up. It looks very informative. Cool.
Let me see if there are any comments. Yes, Waleed — I think Crashd and chaos engineering are probably a good pair. And Waleed also said the "Cisco intern" item is a recent tweet from Tim Hockin. All right, okay, cool — let's get back to the notes.
Yes, I did say "Cisco intern" — I don't know if it's a Cisco intern who wrote it, or... Let's see if somebody could — yes, it is from Kris Nóva, awesome. And this is about a distributed operating system: thoughts on missing parts at the node level, below Kubernetes land. This sounds very interesting; let's take a look. All right, this was a recent one, actually. Cool.
It appears to be rooted in concepts proposed by Andrew Wright and Reinhardt — cool. And this again looks like a pretty lengthy paper about distributed systems and what's missing in Kubernetes at the node level.
So I'm not going to sit here and have you watch me read the paper — you are welcome to, though, and that's why we have the show notes: so we can share these links with you, and you folks can go out there and check out these beautiful write-ups that folks in the community put together. All right, let's see what else we have... that's the end of the show notes. Let me go back to the stream to see if I have — I don't know how to get rid of this.
All right — somebody on the chat has clarified what that write-up is all about. Cool. All right folks, I don't think there are any other questions or anything else I see on the chat. Again, this is my first time doing this, so I'm just going to go with the flow. I think at this point it's a good time to get started and introduce Crashd. But before I do that, let me go ahead and open YouTube so...
...I can see what's going on, because when I move away from my StreamYard tab, I have no idea what other folks are seeing. All right, awesome. And yes, I can see the writing is small — what's going on here is that I'm on a large monitor and it's stretched out, so it comes out a little small on your side. All right.
So we're done with the show notes, and like I said, I don't think I saw any questions or anything outstanding. So I'm going to keep going, and we will start talking about Crashd. Awesome.
So, Crashd. As I put in the show notes, I am the lead on Crashd. Crashd is a project that started about a year and a half — almost two years — ago, and it started immediately as an open source project. Basically, the need was for us to have a way to automate the extraction of information from a cluster for the purpose of diagnosing issues, troubleshooting, etc.
If you go back and look at the GitHub repo for Crashd and go way back, you'll see what Crashd used to look like. Then, about a year ago, we decided to switch from what it used to be — a more line-by-line, declarative approach — and adopt the Starlark language. Starlark is a dialect of Python.
Basically, it allows you to write code just like Python, and you can execute that code using a runtime within Go. Starlark has runtimes in many other languages, but since Crashd is written in Go, we went with the starlark-go implementation. Today we're going to talk about what that has allowed us to do; we're going to look at examples of why a tool like Crashd is important for the supportability of your cluster...
...how you can use it to implement diagnostics-type features in your Kubernetes workflow, and also — if we have time — we'll talk about what Crashd could become. As you'll realize, Crashd's primary use today is automation for diagnostics, but it could be something that we keep working on and iterating on, and make even more general in the future.
So we'll see — we'll talk about all of that. I'm just taking a quick look at the comments here to make sure I'm all right. Cool. We've talked about all this, so let's go. Let me close some tabs.
Yes, we'll blow it up a little bit. All right, awesome — maybe that's too big, but that works. So this is the website — the repo — for Crashd; "crash diagnostics" is the long name that was initially selected. If we scroll down, we see the different features that are part of Crashd, and, as we'll see in a minute as we start going through some of the actual scripting...
...like I said, it's based on Starlark. It allows you to automate interaction with your infrastructure that's running Kubernetes.
You can interact via SSH to target your compute resources, or you can use constructs that talk to the API server to extract information from the cluster as well. There's also support for Cluster API — I may or may not have time to talk about that, but we'll see. And how does it work?
Well, Crashd is a single binary. Once you build it, it gives you a single binary, and you just point that binary at a Crashd script — a Starlark script — and it goes ahead and executes everything in that script. All right, we won't go over the installation. But what's in that script? This is what the script looks like, and we'll go back and actually work through some examples...
...so you can see how it works. At a high level, this is what the scripting for Crashd looks like. There are several different constructs. One type of construct is for configuration, and that's what this is doing here: we're creating a configuration. Basically, a configuration is something that collects settings for something — in this case...
...the settings are for Crashd itself. And here we're also using a different type of construct — actually, there are several types of constructs going on here, and we'll disassemble them when we look at an actual example a little bit later. But we see that we have resources and we have a provider: the resource takes a provider, and the provider can take a whole list of hosts and another configuration. Again, we'll disassemble that a little bit later. And once you have your resource — here's what the provider does.
It allows you to enumerate your resources — in this instance, compute resources — so that we can apply operations on those resources. At a high level, that's pretty much how Crashd works: you have configuration, you have resource enumeration, and you have operations that can be applied to those resources.
So at this point we have our resources enumerated — we store them in hosts — and now we can apply operations against the resources that were enumerated.
Crashd comes with several — a few, I should say — operations that you can apply to your resources, and we'll look at what those operations are. Here, we're using a function called capture. Basically, capture allows you to execute a command on a remote compute resource and then captures the output of that command.
So here we have several capture commands, and what's happening is that the command specified is executed against each enumerated compute resource: for each capture command that you see here, it goes through the machines that were enumerated and attempts to execute that command against each machine, one by one. Now — I'll just throw this in as a side note — Crashd is a fairly nascent project.
One can imagine a future enhancement where Crashd is updated so that each machine's command gets executed in its own execution thread, and the results are gathered after each command completes. Anyway, that was a little side note. Then, toward the end...
Well — one of the functions that we have in Crashd is called archive, and basically it allows you to specify some sort of result, like, for instance, from capture. As we capture these results, they're stored locally, and we can use archive to grab these local files and bundle them into a tar.gz file. Okay — so at a super high level, hopefully that gives you an idea of what Crashd is.
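Putting those pieces together, a minimal Crashd script might look like the sketch below. This is a hedged illustration, not the exact demo from the talk: host addresses, usernames, key paths, and output locations are placeholders, and the argument names follow the project's documented constructs — verify them against the Crashd docs for your version.

```python
# Sketch of a Crashd diagnostics script (Starlark dialect).
# All host addresses, usernames, and paths below are placeholders.

# Configure Crashd itself (working directory for captured output).
crashd_config(workdir="/tmp/crashd")

# SSH settings reused by the commands below.
ssh = ssh_config(username="ubuntu", private_key_path="/home/me/.ssh/id_rsa")

# Enumerate compute resources from an explicit host list.
hosts = resources(
    provider=host_list_provider(hosts=["10.0.0.10", "10.0.0.11"], ssh_config=ssh),
)

# Run commands on every enumerated host and capture the output to local files.
capture(cmd="sudo journalctl -u kubelet --no-pager", resources=hosts)
capture(cmd="df -h", resources=hosts)

# Bundle everything that was captured into a single tarball.
archive(output_file="diagnostics.tar.gz", source_paths=["/tmp/crashd"])
```

As described above, each `capture` call is applied in turn to every host the provider enumerated, and `archive` bundles the locally stored results at the end.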
Let me check to see if we have any questions. All right — Joe, I think, was saying Crashd might be related to... let's see.
"What if the API server is the one having issues?" Okay, I'm guessing this is probably a question for me. Yes — if that was directed at what I was talking about — you can talk to an API server even then, and we'll go through that.
Oh, okay — thank you, Waleed just clarified that the earlier question had to do with Kris Nóva's post. "Do you set Crashd in proactive mode, or is it after an issue?" — this is from Waleed. Oh, that's a very interesting question. So basically, the question is: do you sit Crashd in a proactive mode, where it's watching your cluster or your infrastructure, or do you react after a problem? Right now, the main use is reactive. Actually, one of the driving reasons why we started the project is that we wanted...
We wanted a tool to help us diagnose clusters as they were being created. Folks would run into issues, and we wanted something that helps us debug whatever those issues are. So today, the primary usage model of Crashd is after something has happened: you go against the cluster and you extract information so you can do your analysis. But — actually, I don't think I've shown how you run Crashd, so let's do that here. This is how you run Crashd.
It's a simple binary — one command — and you pass it the script.
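Invoking it looks roughly like this. A hedged sketch: the `run` subcommand and `--args` flag follow the project's README at the time of the talk, and the script name is a placeholder — check `crashd --help` on your installed version.

```shell
# Run a Crashd script (script name is a placeholder).
crashd run diagnostics.crsh

# Arguments can be passed on the command line and read back
# inside the script via the args construct.
crashd run --args "username=ubuntu" diagnostics.crsh
```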
I'm showing that to say you could wrap Crashd in any kind of deployment model that you choose, or you could take Crashd and throw it against a cluster itself. Now, obviously, if that cluster goes down, you want to be outside the cluster to be able to diagnose it. But nothing stops Crashd from running within a cluster, and you...
...write your script in such a way that it intermittently goes through, does things, collects data, and generates the report that we'll see gets generated at the end. There are a lot of usage ideas that can come out of Crashd, because one of the things we wanted to do is keep it simple — keep the way that you use Crashd very simplistic, even as a consumable programmatic API.
I wanted to make sure that it is super simple to consume Crashd, and to keep it super "go-gettable," so that you can include it in your own tooling.
We want to make sure that it is super usable in different contexts — even ones we can't imagine right now. So, long answer to a simple question, but yeah: right now it's more reactionary than proactive. All right, let's see if there are any more questions.
"This kind of looks like Apple system diagnostics — maybe you mentioned that; are you getting your events from the API server or CAPI?" Yeah — you can extract information from the API server. I haven't gotten to that yet — I haven't shown an example of that — but it'll come later; we'll see that in a few minutes.
Conware asks: "Is it collecting data using the API server, or is it getting information by running native OS commands?" It's doing both — it can do both. There's a set of commands for each. Actually, let's do this.
Let me go to the docs real quick, because there are a lot of questions about usage. All right — there's a doc that I try to keep up to date; basically, it's a reference to all the commands and everything else included in Crashd scripting.
Again, we have different examples in the doc, but here are the — let's see — Crashd script files. This section talks about what the script file does. Actually, before I dive in — no, this one only does OS commands; eventually we'll get to one that does both. But this basically shows you an example of how you can create your own functions, because again, it is a Python dialect.
One of the things you can do is create your own function to localize your own logic and then reuse that logic as you see fit. So this one does the enumeration of machines using the hosts file — this is just an example.
Basically, it uses the hosts file: it uses grep with run_local — which, as the name implies, runs the command locally — grabs the result, uses the built-in Starlark string method that splits lines, and then uses that as our provider, so that we can do our own enumeration of resources and then connect to them. This is just to show you an example of what's possible. Anyway, let's keep going.
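That pattern — a user-defined Starlark function that builds a host list from a local command — might be sketched like this. Hedged: the grep pattern, function name, and key path are illustrative placeholders rather than the exact demo shown in the talk.

```python
# Illustrative Crashd/Starlark sketch: build a host list by running a local command.
def hosts_from_file():
    # run_local executes the command on the local machine and
    # returns its output as a string.
    out = run_local("grep 'node-' /etc/hosts | awk '{print $1}'")
    # splitlines is a Starlark string built-in; one IP address per line.
    return out.strip().splitlines()

ssh = ssh_config(username="ubuntu", private_key_path="/home/me/.ssh/id_rsa")

# Feed the computed list into the provider instead of hard-coding hosts.
hosts = resources(provider=host_list_provider(hosts=hosts_from_file(), ssh_config=ssh))
```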
So we have the configuration functions — man, this doc is kind of big, so it's going to take forever to scroll down. Let me move it down a little bit. Okay, maybe too small... wait, there you go. Okay, so these are the configuration functions I mentioned earlier. You have crashd_config, which basically allows you to configure Crashd itself.
We won't stay and read everything; I just wanted to show you what's there. We have kube_config — so, for all the questions about whether you can talk to the API server: yes, you can talk to the API server. kube_config allows you to configure that: you use path to tell it where your kubeconfig file is located, and if you use CAPI, you can provide a CAPI provider. We'll see what a CAPI provider is later down...
...in this document. Okay, and here's an example of what a kube_config looks like. args — I haven't talked about argument passing, but you can pass arguments to your script and reference them in this manner. ssh_config — we've seen an example of ssh_config, where you can configure SSH and then use it later in other commands.
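A hedged sketch of those configuration constructs used together. The `--args` flag and the `args` object follow the project's documented argument-passing mechanism; all values and names here are placeholders, not the doc's exact example.

```python
# Sketch: configuration constructs. Values are placeholders.
# Invoked as: crashd run --args "cluster_ns=workload-1" script.crsh

# Point Crashd at the kubeconfig for the cluster being diagnosed.
kube_config(path="{0}/.kube/config".format(os.home))

# Script arguments passed on the command line are referenced via args.
ns = args.cluster_ns

# SSH settings declared once, to be reused by later commands.
ssh_config(username="ubuntu", private_key_path="{0}/.ssh/id_rsa".format(os.home))
```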
Let's see — here's an example of an ssh_config; we've seen that before. Now, the provider functions. As I mentioned earlier, a provider is something that allows you to enumerate your infrastructure resources — right now that means compute resources, but in the future it could be more than that. So we have a CAPA provider, which knows about CAPA.
CAPA is the Cluster API provider for AWS, and the CAPA provider knows how to enumerate machines when those machines are running within a Cluster-API-initiated cluster on AWS. Okay, so that's that. And here's an example of how that is used. What you see here is that we declare our CAPA — well, actually, we declare the ssh_config first.
If you have keys — if you want to talk to machines that are part of your CAPA cluster, and you want to run commands directly on those machines to extract information — that will be done over SSH, so first you have to declare your ssh_config.
We specify a kube_config, because we need the kubeconfig path, and then we declare the CAPA provider. Now, once you have a provider — again, as we've seen earlier — the provider can be plopped into a resource, and then that resource can enumerate whatever the provider has to offer. And then we have CAPV. CAPV is for clusters that are hosted within vSphere.
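A hedged sketch of that CAPA flow, mirroring the docs example being shown. The provider and config names follow Crashd's documented constructs, but the parameter names, cluster name, and key paths here are placeholders — verify the exact signature against the project's reference doc.

```python
# Sketch: enumerate the machines of a Cluster-API-on-AWS (CAPA) cluster.
# All values are placeholders.
ssh = ssh_config(username="ec2-user", private_key_path="/home/me/.ssh/capa.pem")
kcfg = kube_config(path="/home/me/.kube/config")

# The CAPA provider queries the management cluster for the
# workload cluster's machines.
machines = resources(
    provider=capa_provider(
        workload_cluster="my-cluster",
        ssh_config=ssh,
        kube_config=kcfg,
    ),
)

# OS-level commands can now run on every enumerated machine over SSH.
capture(cmd="sudo crictl ps -a", resources=machines)
```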
The host list provider is basically something where, as a script author, I can explicitly provide the hosts in the code — or, as we saw earlier, you can write code that generates that list for you. It's one of the simplest providers, and here's an example where we specify the hosts explicitly. And then, once we can enumerate our compute resources, we can apply operations on those resources.
As the name implies, the kube node provider is a generic Kubernetes provider. What it allows us to do is figure out node information using Kubernetes and the kube_config that we have: it talks to the API server, enumerates the list of nodes, parses the result, and makes the IP addresses and node names available, so that we can talk to those nodes and apply some kind of command against them.
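Sketched out, that provider might be used like this. Hedged: the name `kube_nodes_provider` follows the project's documentation, but the key path, kubeconfig path, and command are placeholders.

```python
# Sketch: enumerate nodes straight from the API server. Placeholder values.
ssh = ssh_config(username="ubuntu", private_key_path="/home/me/.ssh/id_rsa")
kcfg = kube_config(path="/home/me/.kube/config")

# The provider asks the API server for the node list (names and addresses).
nodes = resources(provider=kube_nodes_provider(kube_config=kcfg, ssh_config=ssh))

# The enumerated nodes can then be targeted like any other compute resource.
capture(cmd="sudo journalctl -u kubelet --no-pager", resources=nodes)
```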
All right, let's keep going real quick. resources is what takes your providers and enumerates the resources so that we can apply a function or a method to them. So this is an example of all three types working together: we have an ssh_config, we have the resource, and we have the host list — we've seen this; it's the same example we saw earlier. But the way resources and providers work is similar for all the other types of providers: the same model applies, and you would do something very similar for the other types. Then, once you have your resources, you can apply an operation against them. Hopefully that makes sense.
Crew — hey, Ymo from Canada. Let's see: "Red Hat OpenShift uses must-gather; it runs on a node." Okay, yeah — I would imagine there are other similar tools out there. But one of the things we wanted to do with Crashd, too, is to make sure that if you do not have an operational cluster, you can still run Crashd and it can still be useful — because a lot of the time, folks are setting things up, especially at day zero or pre-day zero.
So what you can do is use Crashd to get OS-level information even before your cluster is fully formed. Let's see.
Yes — Darwin and Linux binaries are available. All right, Joe clarifies something: yeah, the idea is to have a flexible, super simple way. Yep. "Why is ssh_config needed when we are giving kube_config?" So — the reason why we need ssh_config is that sometimes...
...you have explicitly specified machine information, and we need the ssh_config because we use that information to SSH onto the node and do OS-level operations, like we've seen — for instance, running a command on that machine. Before we get to that point, we need machine information, and there are two ways you can get it: you can explicitly specify it, as we're seeing here on the screen, or, if you already have a cluster that's fully formed...
...you can get that information from the cluster. This is very important in the case of something like Cluster API, where you can have a guest cluster, for instance, that was deployed by a management cluster. You can query the management cluster for machine information about the guest cluster, grab that information from the API server, and then still be able to do OS-level operations against the compute resources that are hosting the guest cluster.
Hopefully that made sense. I'm going to look at the questions to see if there are any more like that. "Is it capable of parsing the output of a command, or does it just capture to a file?" That's a very interesting question. Yes — there are two commands that we have. We have run — the run command — and we'll talk about it right here in the command functions section.
The command functions are what we use to actually apply commands, either locally on the machine where we're running Crashd or remotely on the compute nodes or compute resources that we're interested in. And yes: when you do run, it will run against your machine and actually return...
...whatever the response was, and then you are at liberty to do whatever you want with that result — it comes back as a string, and we'll see an actual example of that. capture skips that step: it automatically grabs the result and sticks it in a file for you. By default, the file will be named based on the command, but you can specify a file name if you want. So let's go through the command functions real quick — I'm paying attention to the time to make sure we don't run too long — and then we'll take a look at actual scripts and work through them.
So: archive. We've already seen what archive does — basically, it's a function that runs locally; it takes a group of directories or files and bundles them into a tar file. capture we've already talked about: it does exactly what it says. It runs a command on a remote machine and captures the result into a file automatically. You can specify a file name, but by default it uses a string form of the command to name the file.
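The contrast between the two, sketched. Hedged: the command strings, host addresses, and key paths are placeholders, and the exact return shape of `run` should be checked against the Crashd reference doc.

```python
# Sketch: run returns the output as a string you can parse;
# capture writes the output to a local file. Placeholder values.
hosts = resources(provider=host_list_provider(
    hosts=["10.0.0.10"],
    ssh_config=ssh_config(username="ubuntu", private_key_path="/home/me/.ssh/id_rsa"),
))

# run gives the result back as a string, so the script can branch on it.
status = run(cmd="systemctl is-active kubelet", resources=hosts)
if status.strip() != "active":
    print("kubelet is not active: " + status)

# capture skips that step and stores the output in a file automatically.
capture(cmd="sudo journalctl -u kubelet --no-pager", resources=hosts)
```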
What else have we got? Actually, here's an example — it's basically the same kind of example we've seen. Let's keep going. We have capture_local. This is something we added later, and it does exactly what it sounds like: I didn't want to overload the meaning of capture, so I created a new function that clearly states what it's doing.
It basically runs a command locally, captures the output of that command, and puts it into a file. Then we have copy_from — exactly what it sounds like: it allows you to copy a file from the remote compute resource onto your local machine. And here we have run. run is what you use to run a command against your remote machine; it executes the command and returns a string as a result. This shows a silly example of running — I think it's running the uptime command on the remote machine. Then we have run_local, which does exactly the same thing, but on the local machine. All right — and then, this is what I really wanted to get to: the Kubernetes functions.
So you have kube_capture, kube_... — actually, before I even get into the Kubernetes functions: today, most of them — I think all of them, actually — query the API server, and you get a result back.
Okay, I'll highlight the output. So that's pretty much how the kube_ commands work. There's kube_capture, and there's also kube_get. kube_capture allows you to query the API server and capture information from it.
Let's look at an example so we can see. Here, we declare a kube_config, we create a namespace list of what we're interested in, and then we call kube_capture. The what parameter basically specifies what it is that you want to capture; then namespaces, then kube_config — and that's pretty much it. You can capture logs or objects.
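A hedged sketch of that call, following the `what`/`namespaces`/`kube_config` parameters just described. The `kinds` filter and all values are placeholders drawn from the project's documented examples, not the exact snippet on screen.

```python
# Sketch: capture logs and objects from the API server. Placeholder values.
kcfg = kube_config(path="/home/me/.kube/config")
nspaces = ["kube-system", "default"]

# 'what' selects the kind of capture: logs here, objects below.
kube_capture(what="logs", namespaces=nspaces, kube_config=kcfg)

# Objects can be captured the same way, filtered by kind.
kube_capture(what="objects", kinds=["events", "pods"], namespaces=nspaces, kube_config=kcfg)
```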
Let's see what else I can show real quick. Oh — we also have the notion of default values. We have something called set_defaults: there are different types of configuration that you can set as defaults, and we use the set_defaults function to do that. Once something is set as a default, you don't have to specify it over and over again. This example shows that being done with resources: once we set it as a default, then down here...
...we don't have to specify it. It's a nicety — a little bit of magic — but it's there to let you make your script look as simple as possible, because the scripts can get big. Okay, we also have OS constructs: we have access to os.name, which returns the name of the OS; os.username; os.home; and you can also read the local environment and arguments.
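A hedged sketch combining set_defaults with the os constructs just mentioned. The pattern of registering a default and then omitting the parameter follows the docs example described above; all host values and commands are placeholders.

```python
# Sketch: set_defaults lets later commands omit common parameters.
# Placeholder values throughout.
ssh = ssh_config(username=os.username, private_key_path="{0}/.ssh/id_rsa".format(os.home))
hosts = resources(provider=host_list_provider(hosts=["10.0.0.10"], ssh_config=ssh))

# Register the resources as the default; subsequent commands pick them up implicitly.
set_defaults(hosts)

# No resources= argument needed here anymore.
capture(cmd="df -h")
run(cmd="uptime")
```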
A
I'm gonna look at the comments to see if there's anything going on, and then when we come back from the comments, because we're already rolling into the one-hour mark, we're going to start looking at actual scripts, run them locally on my machine, and debug stuff as we go through it.
Okay: so it would be nice if it did a pre-flight check after updates, or an upgrade validation check.
A
All right, let's see: "runs
A
already as modules and can interact with the Kubernetes API." Yeah, I mean, that's a valid question: why not Ansible? I think this is super lightweight; at least that's one of my arguments. It's lightweight, it's programmable, it's consumable as an API, that's another thing, and you can wrap it in different modes of deployment: I could put everything in Crashd inside of a Dockerfile, or run it directly locally.
A
There's no need for anything else other than the crashd binary; there are no other pieces, there's no agent that needs to be deployed. I'm pretty sure, if I sat here and thought about it, I could come up with more reasons. But obviously, if you're in a shop where Ansible works for you, then that's what you should use. Crashd, I think, adds another dimension to the conversation of supportability.
A
All right, let's see: if you have not already, can you talk about how you can debug Crashd itself?
A
I know it's cheating: I turn on the Python plugin in my editor to make sure it gives me color highlighting; that's one thing. Two: yes, you can put print statements all over the place, the print function in your script, to output things as your script is running. You certainly can do that. As far as debugging in the traditional sense,
A
I don't think there's anything like that where you can step through, or at least I haven't seen it. I haven't looked at the Starlark project in a while to see what they've included lately, but I don't think they have anything where you can do step-through debugging. That would be an interesting thing to look into, though.
A
Actually, you know, starting your debugger from Go, having it call a script file, and then continuing that debugging as things are getting executed by the interpreter inside Go: there's nothing like that that I know of. So yeah, other ways you can debug: definitely, print statements are a big one. Oh, what I was gonna add is the Starlark,
A
but again, that's where the print statements help: you can analyze your script with print statements and kind of narrow down where your errors are happening. All right, let's see if there's anything else.
A
Yes, yeah, okay. Hopefully we'll get to the point where we can run things in parallel. Actually, one thing I should say: Crashd is open source, and it's available under the Tanzu open source banner. But it's not something that I'm doing just for fun; it's actually something that we use a lot internally. It's part of CI/CD pipelines; it's used in a lot of places.
A
Oh, it's also used a lot with Sonobuoy, which I'm also the lead of, and folks use those two tools in different contexts. A lot of the time, especially in our build pipelines, you'll see folks use Sonobuoy to kick off some kind of conformance test, and then at the end of the pipeline,
A
if anything funky happened, they'll also kick off Crashd to go around and collect information in the pipeline. We also use Crashd as part of our product as well. So it's something that we're definitely serious about; it's definitely a fun project to work on, but it has some serious usage as part of the Tanzu portfolio. All right, okay.
A
All right, so now what we're gonna do is look at some examples. Where are my examples? All right, let me make sure that you all can see my screen.
A
With all this real estate, we should be able to see multiple things at the same time. Okay, so first, let's look at some examples. I kind of highlighted these examples in the show notes, but I don't know how closely I'm gonna follow the show notes; let's see what happens. Okay, the first example: let's look at this one, we'll go over it, hopefully it'll make sense, and we'll try to run it locally. I have a,
A
all right, so I still have my minikube cluster running, and it has three nodes. Let's see; yeah, it has three nodes. It's pretty plain, because there's not a whole lot running on it.
A
It's called ktop. It's under my personal GitHub, but it's something that I rarely have time to work on; when I do, I add features very slowly. Basically, what it is, as the name suggests, is top for Kubernetes. Here we're looking at my nodes, and you can see how rough around the edges it is.
A
Ktop: look and see that this whole panel is empty, nothing in it, but it does the job for today. It shows me that I have three nodes, and there are some pods; actually, these are from kube-system. All right, so we have a Kubernetes cluster running. Let's look at this script. All right, so we have ssh_config.
A
I had a longer example, but I had to shorten it to only talk to one node, because with the ssh_config, excuse me, I can only put one key in there, and so right now I'm only using it to talk to one machine. But anyway, so we have,
A
okay, so there you go: we have the provider, as we've seen. This is the host list provider with its IP address, and then the ssh_config is being referenced here.
A
We've seen this example before, so let's go ahead. What I'm doing is running this command on this resource, and, I forgot who asked the question, but I could have captured the result here and saved it in a file. Instead, I'm using the run function to execute my command, and then I can reuse the result.
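The script being walked through looks roughly like the sketch below. The IP address, username, and key path are placeholders, and the function signatures should be checked against the Crashd docs:

```starlark
ssh = ssh_config(
    username="ubuntu",
    private_key_path="{0}/.ssh/id_rsa".format(os.home),
)
hosts = resources(
    provider=host_list_provider(hosts=["192.168.99.100"], ssh_config=ssh),
)

# run executes the command over SSH and returns the result in memory,
# rather than writing it to a file the way capture does.
uptime = run(cmd="uptime", resources=hosts)
print(uptime)
```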
A
All right, so this example won't do much other than talk to the first, I think it's the control plane node, and grab the uptime information from it. So let's go ahead and give it a try. Actually, the crashd binary is up in the previous directory, so it's that, then `run`, then the name of the script, and we'll turn on debug output so we can see what's going on. All right, so here it is.
A
That was crashd running the script, and this is the print statement that we have at the bottom; that's what it's doing now. I wanted to show you this: so uptime, what run returns, is what we call a Starlark struct.
A
Actually, if you have more than one machine in that list, it'll return you an array of structs, but since there's only one, it's going to return one struct. It might be a behavior that I change later, but right now, that's what it is. I thought I was being slick and helpful, but when I thought about it, and I kind of got confused earlier, I was like, wait a minute, I thought this was,
A
this was an array, and I was like, oh yeah, I remember: I was being helpful with a feature that no one asked for, and that was throwing me off. So hopefully, when I get around to it, everything will be uniform: you'll always get an array. So if you print uptime, I just want to show that real quick, you'll see what comes back.
A
So here it's printing the actual struct. It's called a command result, and it has an err field, if you want to check that; the resource that it talked to; and the actual result, as a string, that came from the result of the command. All right, okay. Let me check questions before I move to the next example.
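Based on the fields just described, checking a command result might look like this. The field names (err, resource, result) follow what was shown on screen and should be verified against the docs:

```starlark
# hosts: a resources(...) value declared earlier in the script.
uptime = run(cmd="uptime", resources=hosts)

# The command result struct carries err, resource, and result fields.
if uptime.err == "":
    print("{0}: {1}".format(uptime.resource, uptime.result))
else:
    print("command failed: {0}".format(uptime.err))
```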
A
Okay, yeah, so we're still talking about Ansible.
A
All right, so it looks like there's no question directly about what we just looked at, so we'll look at another example.
A
Let's look at example two. Pretty much most of these we've kind of covered before, but I just wanted to show them running. All right, let's look at this guy. Okay, so we already talked about pretty much everything in here. What this example is doing is using kube_capture to talk to the API server to pull logs from this namespace, kube-system, and we'll see how it does that.
A
So we configure crashd. We have the notion of a working directory, so we set that to the location where we're running the script.
A
We set our kube_config to point to the default config location, so you can imagine that being anywhere, and then we configure kube_capture, saying we want to pull logs from our API server. Then, once we're done, we wrap everything in a tar file. All right, so let's run this real quick and see what happens.
A
Let me make sure that I'm not too small in the front; yeah, I think it's okay. All right, so here's the work directory that it created as it's collecting information, and this is the tar file that it generated. So if you look in the working directory; actually, let's do it from here, it'll be quicker.
A
If you look in the workdir, you'll see what came back. Basically, it does something similar to what kubectl does with cluster-info dump; I don't remember the name of the command exactly, but basically it arranges the results in directory names that match the resource each result came from.
A
We targeted the kube-system namespace, so it creates a directory, kube-system, and then inside there, all the objects that we queried are captured. So, for instance, here we see, I don't know, some of; yep, there it goes: this is the log from that particular pod, and you can see logs from other pods, etc. So that's how that works. All that information is then wrapped up into the tar file that was generated. All right.
A
So that was that; again, that's something we've already seen. So let's go to something we haven't talked about. Let's see; I think capture, this one.
A
So this particular example, let me close this, let's go over it real quick, but it's basically doing the exact same thing that all the other examples are doing. We set up some configurations.
A
We set up a provider, resources, ssh_config. Basically, what I'm doing here is embedding the definition for the configuration directly, as part of the parameter for the enclosing function: here we're putting the ssh_config inside the host list provider, and then we're putting the host list provider inside resources, and everything is just enclosed.
A
But it's basically the same thing we've done before: we're declaring an ssh_config, some kind of provider, and then passing that to our resources. Once that's done, we execute capture commands against it and archive everything that was captured.
A
Let's run this. All right, and you'll see some warnings, and probably some errors too, because I think it tried to talk to one of the machines and couldn't, I believe. So anyway, here is the result. Oh yeah, this is what I wanted to show: the result and how it's captured.
A
Let's see, if we look here: so remember I was saying how, by default, the capture command will create files that use the string of the command as the name for the file. That's what's being done here, but you can override that behavior by specifying an actual file name. So you can come here and say file_name equals,
A
I don't know, sudo.txt, or actually df.txt, and if you run this again, obviously putting in a comma, the capture will now use the file name df to generate the file. All right.
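The override just demonstrated can be sketched as follows, assuming the file_name parameter as shown on screen:

```starlark
# Default: the output file is named after the command string itself.
capture(cmd="sudo df -i", resources=hosts)

# Override: write the output to df.txt instead.
capture(cmd="sudo df -i", file_name="df.txt", resources=hosts)
```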
A
All right: what else can you pull, other than logs; any resource, events? Yes, so that was from Walid. Yes, you can. The code behind kube_capture and kube_get; we're not going to get into kube_get, but kube_get is basically kube_capture except it returns a structure, an in-memory structure, that you can use in your script. Both of them use the same code internally, the same Go code, and what that allows you to do is basically query any resource
A
that's inside the API server, as long as you provide it with, you know, the resource name, path, etc. Then it'll figure out how to get to that resource and return the full resource for you. Yes, let's see: Starlark.
A
Skylark: that was a long time ago. Okay: did you picture a centralized library of capture scripts for diverse types of scenarios, e.g. check network? Yeah, actually, that's a good question, Walid. It would be nice eventually to have pre-baked and tested,
A
I guess, scripts that can be reused. One of the things I haven't done, because you can do it in Starlark but I haven't done it as part of Crashd scripts, is the ability to actually create modules in Starlark and have those modules be reused. So one could imagine having a collection of scripts that covers diverse types of scenarios: networking, the control plane, very specific checks, or pulling information for diverse types of resources.
A
Could we variabilize scripts? Yes, you can; I'll show that in a second. Let me see here; yeah, curated scripts, exactly, that's the name I'm looking for. You can variabilize your script; you can variabilize a script from the outside. Actually, let's do that now. We're approaching 5:30, so actually I'm doing a time check on you guys: if you want to keep going, I can keep going.
A
Otherwise, we can wrap up in the next five, ten minutes. But before we do, let me do this, because I think that's one thing I have not talked about: the ability to variabilize, or parameterize, your script. So let's take this script, for example; I think, yeah, this one; it ran without an issue.
A
Okay, awesome; thank you for the feedback, Wally. So let's take this guy, for instance. I'm just gonna copy it, and I'm gonna do args.ip_addr, for instance, right? All right. So now, what I can do; this is capture.crsh.
A
Here, let's blow away the workdir and,
A
I don't want to do that, and the diagnostics script; okay, so let's remove those. Anyway, what I wanted to show is this: what we can do is come here and say --args equals; what did I call it, ip_; so you can say ip_addr equals,
I
can't
remember
if
I,
if
I
need
to
quote
it
or
not,
but
we'll
see
if
it
works,
then
no
quotes
needed.
If
it
doesn't,
then
we'll
figure
out
quotes
all
right.
Let's
run
this
all
right,
so
this
is
the
kind
of
errors
that
you
get
when
you
run
when
you
were
on
your
script
and
it's
very
pyth
pythonesque
and
you
know
yeah,
you
get
a
basically
traced
back
a
stack
trace
rather
of
of
of
your
error,
and
it's
telling
me
that
crash
the
config
whose
parameter
got
a
string.
A
I
want
a
list
yeah
yeah,
yeah,
yeah,
sorry
that
was
my
bad.
A
Let's put it as a list. Okay, let's try that again; there you go, so we got the same result. So this was just to show you that you can parameterize your script. Actually, there's another way we can do it; I'm going to do it now too, so you can see. All right, so the other way you can parametrize:
A
so this one was with the --args flag, but there's also --args-file. So if you have a bunch of these things, we can come here and, I didn't mean to rerun it, but,
A
let's wait for it to finish. All right: let's create a file called args.txt, for instance, and in that file, we're gonna put this. Then we're gonna get out of that, and we're gonna run, but instead of --args, it's gonna be --args-file, and then the file name; it's back here, I think it was called; and it should work. There you go: a couple of ways you can.
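The two parameterization styles just shown amount to the sketch below. The flag spellings and the args struct follow what was shown on screen and may differ slightly by version:

```starlark
# In the script, externally supplied values arrive on the built-in args struct:
hosts = resources(
    provider=host_list_provider(hosts=[args.ip_addr], ssh_config=ssh),
)

# Invoked from the command line:
#   crashd run script.crsh --args "ip_addr=192.168.99.100"
#
# Or, with the values collected in a file (key=value pairs):
#   crashd run script.crsh --args-file args.txt
```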
A
Very astute, very interesting question, because you're an astute observer, Ymo. So the question is: what happens if the args are missing? Right now, the script will fail. One of the things, I think I have an issue filed for it, is that it fails badly. Actually, let's do it: let's change it to ip.
A
So if you do that, yeah: basically, it just says that there's no ip attribute like the one you specified. The other thing I want to do is kind of rework that argument section, because there are some instances where you don't want that catastrophic failure just because somebody didn't pass an argument; you may want to recover. Because this is more of a,
A
this is more of a parsing failure, not necessarily an execution failure, not a logic failure, right?
A
So as it's parsing its internal structs, it sees that it doesn't have an ip attribute for that particular struct, and then it fails. But there are instances where you don't want that failure; you want something else to happen.
A
Yes, actually; so Weimou points out that bash has a way to specify default values. Yes, that is one way of approaching this as a fix: to provide,
A
so we could provide a default value if none is given. But the problem is the type that's used, the Starlark struct: if it wasn't constructed ahead of time with that particular attribute, then it complains. So having a default value probably will help, but I'm thinking that there might be some additional work that needs to be done with that.
A
Yes, yeah. Actually, I haven't looked into Starlark's embedded set; oh, I see what you're saying, yeah, to set a default. Actually, funny enough, one thing I was considering is how Go does flag parsing.
A
One thing I was considering is doing something flag-ish, for lack of a better term, where you consider the incoming arguments as flags, and you provide something that describes each flag and what you should expect. I guess that kind of plays into the suggestion to use defaults, but, you know, you declare your
A
expected arguments ahead of time, and then let Starlark figure out what to do when the argument is not there. So if you pass something completely unexpected, it'll do something like this, where it completely crashes; but if you pass something that is expected, then you can test it, even if it wasn't provided by the user, and get around the catastrophic failure that it does today. All right, let me see.
A
I think we've looked at; yeah, somebody put ktop on there, awesome. Let's see.
A
I'm trying to think of what; oh, so the one thing we haven't talked about, that I think we did talk about; sorry, sorry; we talked about the host list provider. I don't know if we ever talked about the Kubernetes provider.
A
I was just checking out a question that I saw pop up on the side: is it hard to extend Crashd and add custom functions? Okay, so,
A
it's kind of small, so: this function is a type that implements the signature for what we call a Starlark built-in function, and when you implement a function with this signature, that function can be registered to respond to a script function call, and I'll show you how that's done. So here we have archiveFunc.
A
It takes some arguments, and these are examples of what to do with them: these are basically the arguments that come from the script. You get them here, you can parse them, and now you're in Go territory, so you can write Go as you normally do. Once you're done, you can return a value back to the script, which is what we're doing here, but that value has to be a Starlark value type. Here we're returning a Starlark string which holds the path of the generated tar file.
A
Okay, so that's one step. Then, once you have your function, you have to come here; I've got to make sure I remember where that's at; I think it's here; and you have to register it. Here we go: so here, when you call the Starlark, I think it's function or method, I don't remember which one; basically, once you instantiate the script so that it can run, you have to pass what we call a string dictionary, and basically it's a,
A
let's see: so "archive"; that identifier, archive, basically resolves to a string, and then that string is now the name of the script function,
A
so that when it is encountered, we will be executing archiveFunc, which we saw earlier. So that is, at a super high level, how you extend on the Go side.
A
Now, on the Starlark side, on the script side, that's another way you can extend Crashd, because what you can do is create functions. I think we saw examples of functions.
A
Okay, this one right here. Okay, so you can use the def keyword, just like in Python, to declare and define a function, a higher-level function, and then, once you have that, you can use that function in your code. Now, one thing, like I mentioned earlier:
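A def-based helper like the one on screen might look like the sketch below; the commands and the hosts value are placeholders:

```starlark
# A reusable helper defined with def, just like in Python.
def capture_node_info(hosts):
    capture(cmd="sudo df -i", resources=hosts)
    capture(cmd="uptime", resources=hosts)
    capture(cmd="sudo journalctl -l -u kubelet", resources=hosts)

# Call it like any other script function.
capture_node_info(hosts)
```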
A
Starlark supports the notion of modules, so what you can do is create a collection of Starlark scripts, where each one of those scripts could be a collection of functions, and then store that close to crashd. Then you can have, like, a main script that uses those modules as needed. So that's another avenue of extension that you can use. I wanted to point that out.
A
All right: other than the GitHub repo, are there any other resources, Slack channels? Yes, actually, there is a Slack channel. Let me see if I can get it.
A
Yeah, there's a Slack channel. Right now it's not used much, but there is one under the Kubernetes Slack, and it's just crash-diagnostics; we'll put it in the show notes as well. Like I said, usually the interaction is either over Twitter, or internally, or via GitHub, but if you guys want to reach out on Slack as well, please do so, because we're trying to; you know, that's
A
one of the reasons why I wanted to do this TGIK: to talk about Starlark and basically be loud about it, let folks know that it's out there and what we want to do with it. Awesome, Waleed, I see you; awesome, thank you. And let folks know what we're doing with it, and eventually, you know, keep working on it and make it an awesome tool. I have tons of ideas that I'd like to see, hopefully, get into that,
A
that I'd like to see get into the project. As I mentioned earlier today, the use case is definitely supportability, and, I forgot the term that Joe used, but what I'd like to see eventually is more and more functionality added via functions, to where the way that we interact with the cluster is more general.
A
Thank you, Joe. But yeah, this has been fun; this has been joyous. I'm extremely humbled to be able to share the stuff that I've worked on for the last, like I say, year and a half, and hopefully, after this community exposure, it will get more traction in the community and have folks participate.
A
We need more resources to make things happen, but, even as something that I do alongside countless other things that I'm involved in, it is very, very rewarding to work on this project, because it's something that went from not existing to being distributed with our products, being open source, and being used heavily internally.
A
So, you know, I'm super humbled to have had this opportunity to work on it, and I look forward to participation from you. I'm checking to see if there are any more questions; awesome. Thank you all for participating, for coming by. I think I'm gonna call it quits; it's 41 after five, so I've probably been talking for about an hour and 45 minutes now. So I'm gonna go ahead and end the transmission. So again, thank you, thank you.
A
Thank you, and hopefully I get to do another one of these, if I can talk Joe into letting me drive this car again. Awesome; thank you; see you next time. I am ending the broadcast. Bye-bye.