From YouTube: Webinar: Introducing Trackman: an open source tool to sequence application deployment to Kubernetes
Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Oftentimes, application deployment to Kubernetes involves multiple `kubectl` and `helm` chart runs. Trackman makes deployment to Kubernetes simple by introducing execution sequencing and checks for reliable deployments.
Kaitlin: All right, we're going to go ahead and get started. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, Introducing Trackman: an open source tool to sequence application deployment to Kubernetes. I'm Kaitlin Bernard, marketing manager at CNCF, and I'll be helping to moderate today's webinar. I'd also like to welcome today's presenter, Khash Sajadi from Cloud 66.

Just a few housekeeping items before we get started. During the webinar you are not able to speak as an attendee, so there is a Q&A box at the bottom of your screen. Please feel free to drop any of your questions in there throughout the webinar, and we'll get to as many as we can at the end. The session is being recorded and will be sent out afterwards, along with a link to the presentation. So with that, I'll hand over to Khash to kick off today's presentation.
Khash: So as I said, I work for Cloud 66. I'm primarily looking after the product part of things at Cloud 66, and we have a couple of products — actually four products — that we sell as a service. But when it comes to open source, we are a company that produces open source projects, either for the community that we are active in, or just internal tools that we have and wanted to share with other people.
So what we do is sell products. Our products help small businesses with their DevOps mission and with organizing their infrastructure, in a way that, you know, if they don't have dedicated SRE or ops people, our products are very useful to them. But the open source part of it is only projects that we share. We are not an open source company and we don't make money from this. The reason I'm saying this and reiterating it is that I know there's a lot of sensitivity around pitching products, and around using an open-source platform or a forum like CNCF at any event to pitch any products. Frankly, these open source projects we have are sponsored by Cloud 66. We usually use them internally, and we share them just to get problems solved for other folks around as well.
So with that, some of the projects that we've rolled out before — Trackman is the fifth one. The first one was Starter, which is a small tool that reads the code in any codebase and tries to understand the framework, the language, and some intricacies around each one of those frameworks, and from there it gets a Dockerfile and some other artifacts generated based on the source code — things like Kubernetes manifest files — and we'll get to that a little bit later.
The second one was Habitus, which is a build workflow tool for Docker. It does two primary things. One is to create a step-by-step workflow for Docker, building one step after another, which you can also do with multi-stage builds in Docker. But another thing that a lot of people find useful about Habitus is that it allows injection of secrets during the build process of Docker images, which is sometimes very useful.
When you have private repositories, or dependencies like npm packages or Ruby gems that live inside places that require SSH keys or API secrets for some parts of the build, Habitus is a good tool to use them without leaving any secrets behind in the image. Copper is our third product that we rolled out, and it's like a unit test for Kubernetes. It's got a very simple DSL that you can write.
It runs on your Kubernetes YAML files — or manifest files, JSON files — and makes sure that they comply with rules that you set, so you can think of it as an FxCop for Kubernetes manifest files. Alterant was another one that we rolled out a while back. If you're familiar with XSLT, which is a transformation language for XML, it's kind of the same thing but for YAML, and it's not as difficult to use as XSLT — it uses JavaScript.
So you can write JavaScript to transform one YAML file into another one, and I think I can touch on why that's useful a bit later. And the last one, which I'm going to show you today, is Trackman, which is a command sequence management system for Kubernetes. I mean, technically it can be used for any sequence of commands, but we built it with Kubernetes in mind because we had certain problems we needed to solve. So, starting with Starter — you can find it on GitHub under the cloud66-oss organization.
As you can see, it's detected that I'm using the latest version of Ruby, and from there it's going to analyze the code, generate background and foreground processes for me, and generate a bunch of files. In this case I'm using it to generate docker-compose as well, so from a non-containerized application I end up with something that can get me started. It obviously generates those files, and you can take it from there, enhance it, modify it and amend it.
You can use it in the next step. But also, more importantly, sometimes I've seen people use Habitus to build projects within a specific environment that they don't have available on their laptop — for example, they're on a Mac and they want to build something for a specific framework or architecture. They do that inside of a Docker image, then pull out the artifacts as just binary files, and then upload them as part of a release process, for example.
The second thing that you see at the bottom is a snippet of a Dockerfile with a `host` argument generated, and you see there that I'm running a wget — or we can do it with curl — that is hooking up to a host, which is the port binding for Habitus itself while it's running, so it's reachable from within the Docker build.
I'm actually doing a curl out to Habitus to pull out a secret — an SSH private key in this case — and then using it to, for example, connect to GitHub, to my private repository. And within the same Dockerfile line I'm removing that file, so there's no trace of it in that layer. That's a very safe and secure way to reuse secrets during the build without leaving them behind.
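The pattern he's describing — pull the secret in from Habitus over the build-time network, use it, and delete it within the same layer — might look roughly like this in a Dockerfile. The secret name, port and paths below are illustrative assumptions, not Habitus's exact API; check the Habitus documentation for the real endpoints:

```dockerfile
# `host` is passed in by Habitus at build time and points back at the Habitus API
ARG host

# Fetch the key, use it, and remove it in a single RUN instruction,
# so no image layer ever contains the secret.
RUN curl -s "http://$host:8080/v1/secrets/file/ssh_key" -o /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa && \
    git clone git@github.com:myorg/private-repo.git /app && \
    rm /root/.ssh/id_rsa
```

Because the fetch and the `rm` happen in one `RUN`, the layer is committed only after the key has already been deleted.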
Sometimes you end up forgetting to delete them, or you leave them in as a layer in an image, and that's what Habitus can be very helpful with. You can find Habitus at habitus.io. Copper is the third project. As you can see, it's got a very simple DSL, and what it does is make sure that you can set up rules.
Things like, you know, don't use the latest image in your Docker image references. If you're familiar with the YAML format — which I'm sure you are — you'll see that I'm doing a fetch of all the images that I have on all of my containers; in this case it could be a Deployment. Copper understands the syntax of a Docker image, and it can therefore interpret that string.
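Copper's actual rule syntax is documented in its repository; purely to illustrate the idea he's describing — and explicitly not Copper's real DSL — a "no latest images" rule has this shape:

```
rule NoLatestImages:
  for every container image string found in the manifest:
    parse the string into (registry, name, tag)
    fail the check if tag is "latest" or tag is missing
```

The point is that Copper parses the image reference itself, so a rule can reason about the tag rather than doing naive string matching.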
And then you can have these rules dovetailed right after the files are produced by your pipeline or CI/CD, and then, before they hit the cluster, you can make sure that they comply and conform to your rules. You can find Copper on GitHub under cloud66-oss, and I think we talked about this before. Alterant, on the other hand, is a simple way to convert YAML files — and this is a very simple example, because this session is not about Alterant.
But here you can see that I'm using a very simple JavaScript snippet that adds an annotation with a deployed-at timestamp to any Service that I have in my manifest file. The `$$` here — the two dollar signs — is always loaded with the YAML file that you would load into Kubernetes, and then I'm going through each section, each one of those documents.
If you have a multi-part YAML, for example: if it's a Service, then I'm just going to add an annotation to it, and in this case it's an annotation that's dynamic.
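Inside Alterant, `$$` is bound to the parsed YAML documents and the script mutates them in place. Outside of Alterant's runtime, the same transformation can be sketched as plain JavaScript over already-parsed manifest objects — `docs` below stands in for Alterant's `$$`, and the annotation name is just the example from the talk:

```javascript
// Add a deployed-at annotation to every Service in a multi-document manifest.
// `docs` plays the role of Alterant's `$$`: an array of parsed YAML documents.
function annotateServices(docs, timestamp) {
  for (const doc of docs) {
    if (doc && doc.kind === "Service") {
      doc.metadata = doc.metadata || {};
      doc.metadata.annotations = doc.metadata.annotations || {};
      doc.metadata.annotations["deployed-at"] = timestamp;
    }
  }
  return docs;
}

const manifests = [
  { kind: "Service", metadata: { name: "web" } },
  { kind: "Deployment", metadata: { name: "web" } },
];
annotateServices(manifests, "2019-12-01T00:00:00Z");
// Only the Service document gains the annotation; the Deployment is untouched.
```
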
But you can imagine that we've used this to inject Istio, for example, into any deployment that we have going out and hitting our cluster. And the reason we use this, as opposed to, you know, operators or any other similar way of doing it, is that we want it to be transparent.
So first of all it's version controlled — we put these files into a git repository. But also we want to make sure that we can inspect the artifacts that it generates, and we put those into a git repository as well, and then we apply them within our normal CI/CD pipeline, to make sure there's no magic happening behind the scenes and all clusters are just vanilla Kubernetes clusters, regardless of their environment. That's Alterant — you can find it on GitHub under cloud66-oss as well.
So it goes out into multiple Kubernetes clusters and delivers its service to our customers, and as part of that, one of the things that we wanted to do was to make sure there's a sequence of things running when it hits the cluster. For example, sometimes we have a database migration that we need to do before the code goes up. Now, most of the time those database migrations are backward compatible.
So while the database migration is happening, you can still run the old code and you can run the new code. But once the new code goes out and the new database is migrated, sometimes we need to then make sure that the migrations happened, everything is OK, the schema is in the correct shape, and then we can roll out the next step of the deployment, which is the application itself.
So if you think about it in terms of Kubernetes language, you might put your database migrations, for example, in a Kubernetes Job, and then your application is obviously going out as a Deployment — or DaemonSets, or StatefulSets, or whatever you have over there. So you'd have two files, for example, that you use with kubectl.
The first one will deploy a Job and the second one will deploy a Deployment, but you want to make sure the second one always happens after the first one — once it's complete and it's successful. And given the fact that Kubernetes runs in an asynchronous manner, just running `kubectl apply` on my Job is not going to guarantee that, first of all, the Job is going to succeed and complete so that I can do the next step.
We could have done this with a simple bash file, but frankly nobody in the company is a fan of bash files — that's the first thing. The second thing: we wanted it to be developer friendly. We wanted this to be something that we can just roll out and have a single file that would run a bunch of commands on your machine, and it would deploy the entire application on Kubernetes without doing anything custom.
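As a concrete sketch of that single file: a Trackman workflow is YAML describing named steps, each with a command and optionally a probe and dependencies. The field names below are close to what the project's README describes, but treat them as approximate and check the repository for the exact schema:

```yaml
version: 1
metadata:
  name: deploy-myapp
steps:
  - name: migrate
    command: kubectl apply -f migrate-job.yml
    probe:
      # kubectl wait blocks until the Job reports completion (or times out)
      command: kubectl wait --for=condition=complete job/migrate --timeout=60s
  - name: deploy
    command: kubectl apply -f deployment.yml
    depends_on:
      - migrate
```

Running a file like this with the `trackman` CLI gives you the ordering guarantee the bash script would have had, without writing any custom logic.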
I have a simple Job here that's calculating the value of pi, I think up to 10 digits, and on my laptop it takes about 10 seconds or so. What I want is to run this as a Job, make sure it's complete, and then clean up after myself. So normally, if you were to do this on a Kubernetes cluster — I think I have a minikube here.
Okay, so what I've done now is I just ran the Job, and it was asynchronous — as you can see, it's still running; it'll take another couple of seconds to finish. Yeah, so it's done now. Nothing very exciting here; we just ran the Job. Now let's take it one step further. What I want to do now is define what we call in Trackman a probe: I want to run the Job, and I want to wait for up to one minute for the Job.
Well, now it's rendered — the previous Job, yeah, the Job's finished. So, as you can see, if I do this again, I'm running the Job itself and I'm running the probe. This time I'm going to run it with a log level of debug, just to see what's going on. Okay, so the condition is met and it's finished. So now I have the Job running, but there is still the pod that's left over, and if I run it again, as you can see, it would bounce back very quickly.
So in order for me to run this Job over and over again — if it's a regular Job definition — I need to clean up after myself. And then what I can do here is say there's another step, and that's cleaning up the Job, and this depends on the previous step's run. If I don't mention what this step depends on,
it will try to run as many jobs — as many steps — as it can in parallel, depending on how many cores I have on my machine. And then, if I have any dependencies, it will basically build a graph of which ones depend on what and how many can run, trying to maximize the speed of this run. So it can get a bit clever in terms of how many jobs it can run. But let's just keep it simple here.
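The dependency-graph scheduling he mentions can be illustrated with a small, self-contained Python sketch — this is an illustration of the idea, not Trackman's actual code. Steps with no unmet dependencies are grouped into waves that could run in parallel:

```python
def run_order(steps):
    """Group steps into waves: every step in a wave has all of its
    dependencies satisfied by earlier waves, so a wave can run in parallel."""
    done, waves = set(), []
    while len(done) < len(steps):
        ready = sorted(s for s, deps in steps.items()
                       if s not in done and all(d in done for d in deps))
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        done.update(ready)
    return waves

# step name -> list of steps it depends on
steps = {
    "migrate": [],
    "deploy": ["migrate"],
    "smoke-test": ["deploy"],
    "warm-cache": ["migrate"],
}
print(run_order(steps))
# [['migrate'], ['deploy', 'warm-cache'], ['smoke-test']]
```

Trackman additionally caps the real parallelism at the number of available cores, but the wave structure is the same idea.
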
Okay, I think I need to set the timeout up a bit. So now I had the cleanup job running. Now, one of the things that we've seen is that if I give this to someone and they don't have kubectl set up, for example, then it's not going to be any good. I might not always have kubectl here; I might have other things like Helm charts, for example, where I need to install the Helm chart.
You might not have Helm installed on your machine, or I might want to make sure that not only do you have Helm, but you have Tiller installed on the cluster that Helm is connected to as well. That means you can have some pre-flight checks here as well. So in this specific case, I've done a simple `kubectl version` check, where I'm relying only on the exit value of the command, and that is good enough for me.
So I know that you have kubectl installed on your machine — but you can go further and check the version, to make sure that you have a specific version of kubectl, or anything else that you might want to check as a pre-flight before you start running this step. And for that, I can just run this here: you see that I'm running the pre-flight for that step, which now runs and returns the version — it's just the output of that command.
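A sketch of how such a check might sit in the workflow file — again, the field names are approximate assumptions, so verify them against the project's README:

```yaml
steps:
  - name: install-charts
    command: helm upgrade --install myapp ./charts/myapp
    preflights:
      # Each preflight must exit 0 before the step's command runs
      - command: kubectl version --client
      - command: helm version
```

If either binary is missing (nonzero exit), the step fails fast instead of dying halfway through a deployment.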
Then it runs the command and the probe, and the cleanup, and everything in one go. So technically, what I can do is just give this file to someone, and if they have Trackman installed they will be able to just deploy something. So that's kind of the basics of what Trackman does, in a nutshell, but there's a lot more inside the tool that can take care of things.
You can have metadata, obviously, like any other Kubernetes asset, but as well as that, you can use that metadata to manipulate the commands that you run. For example, in this case I have a step that just checks the arguments of something. I can set the work directory — but, okay, I can also get the value of this specific metadata key, which is this string, and pass it into my command. This way I can have a fixed command with different metadata and run different types of commands.
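As a sketch of what he's describing — a fixed command parameterized by workflow metadata. The exact templating syntax here is an assumption; the project's README documents the real one:

```yaml
metadata:
  namespace: production
steps:
  - name: list-pods
    workdir: /tmp
    # {{ ... }} is illustrative templating: the metadata value is
    # substituted into the command before it runs
    command: kubectl get pods --namespace {{ metadata.namespace }}
```

Changing only the metadata lets the same step definition target, say, a different namespace.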
That's a very simple way for Trackman to run a sequence of commands. It's a very simple tool — nothing really magical about it, apart from the developer experience.
So what we've seen is that if you wanted to deploy the entire stack of an application that you have, usually what you'd have to do is sit down and write a bunch of things that run once — for example, things that set up your cluster. You know, things like the credentials to your Docker repository: that's one setting.
Some parameters around RBAC on your cluster are a second set of things that we run — and those are things that we run only once, but there's a sequence of them. Then the second batch of things that we run are usually some Helm charts, for example, that will deploy basic components of our application, like databases — MySQL, Redis — and potentially something like ActiveMQ, NATS or other things around messaging. And then we get to the crux of it, which is the application itself.
Now, sometimes you might bundle your application into a Helm chart; we bundle it into something called a Formation. But regardless of that — if you use Kustomize or anything else — you will end up with multiple steps that you have to run, and one thing that we wanted to do is make that as predictable and repeatable
as a Kubernetes application is on Kubernetes itself: you have the same YAML you can run over and over again, which is great. But then we don't have the one level above that, which is a sequence of other things that you run in the right sequence and in the right order to end up with the entire stack deployed — and that's why we created Trackman. You can find Trackman on GitHub under cloud66-oss, which is where our open source projects are homed.
I can take my stack definition and pipe it into this, and that's all — then all the commands that I need to run to get everything going, from the first step to the last, to have my entire stack deployed. That's just a very quick intro to Trackman. As I said, we built it ourselves to make sure the entire stack is deployed every time in a predictable way, but we've also released it as an open source tool.
Another thing that you can do with Trackman is use it as a Go library, not just as an executable. You can include it in other projects — we have it here written in Go as a library — and it will take care of executing everything, and can take care of the processes, killing them, within the library itself, so you don't need to worry about those things within your application.
Kaitlin: [reads a question from the Q&A — not captured in the transcript]
Khash: At this point, the asynchronous nature is slightly difficult, because the tool is written in Go. As you know, with running processes from Go, asynchronous or not — it's much easier to run a synchronous external process from Go than to just let it go. So one of the options is to keep Trackman alive while the asynchronous steps are running. But then again, the whole purpose of Trackman was to make sure things happen in a predictable way.
So fire-and-forget would have kind of defeated the purpose, in the sense that we wouldn't know if the deployment had gone through if we were to do it asynchronously. However, this doesn't mean that we cannot make it asynchronous and just have fire-and-forget. At this point of the roadmap we don't have any requirements for it. I would love to know if you have any specific use cases where we can do this, and we can definitely look into it.
Kaitlin: All right, another question here: how is Starter different from s2i in OpenShift, or is it similar?
Khash: It's container based in nature — you can think of it as a very generic way of running, you know, steps after each other in a predictable way, and taking care of it that way. Apart from that, there isn't much difference in philosophy; basically they're very much the same. We wanted to have something that's more generic, because we just don't use it all the time for Kubernetes or containers — that's the only difference.
Kaitlin: [reads a question from the Q&A about whether Trackman sits on top of tools like Helm]

Khash: Yes — so a couple of things here. To answer your question directly: yes, that is correct, Trackman sits on top of other things. We use it where part of the steps are Helm, and sometimes some of them are just pure kubectl.
There are things that we deploy — but there's one thing specifically to think about with Helm, and I briefly mentioned that we use Formations instead of Helm for our own application deployment: we use Helm for package deployment, not for application deployment, because we see Helm technically as a Kubernetes package manager, the same way that, for example, npm is for Node.js, or gems are for Ruby.
And I think that's why a lot of other things have been merged into, for example, kubectl — you know, Kustomize is one — and for other ones that are specifically about application management, we tend to use Formations. As a result, Trackman sits on top of those things, and Helm is part of the bigger chain, where it deploys the basic components of the application, but the rest of the things that it deploys go through, for example, Formations or kubectl directly.
Kaitlin: Awesome. So I'll give everyone a last chance if there are any other questions you would like us to answer. Otherwise, the information and all of the links are now in the chat and will be available in the slides when we send those out, in case you have any more questions or want to get more information once you start diving into Trackman. So with that, I think I will close this out for today. Thank you again, Khash, for the great presentation. The recording and slides will be available online later today.