From YouTube: Platform Sync: 2020-08-26
Description
* Status Updates
* Release Planning
* pack 0.14.0 - https://github.com/buildpacks/pack/milestone/21
* docs from kubecon - https://github.com/buildpacks/docs/milestone/10
* Windows dev tooling
A
That hadn't been my plan. I just filled out the notes section for this week, but I can talk for a little bit. So hello, everyone. We may have a few new faces at this meeting, so I don't know if anyone that hasn't been along wants to say a quick hello.
D
Y'all might know me, but I'm Micah, working on a lot of the Windows contributions with some other folks here.
C
This week I could speak a little bit to some of the things that are going on in the background. It does seem like we've gotten a little bit of traction or community input from, I'm not entirely sure if it's KubeCon. I know that Google has also posted some Cloud Next related blog posts about their integration with buildpacks. So we've seen a slight uptick of, you know, new members coming in asking for help.
C
Some of the things that are coming up are related to, like, enterprise environments. So a lot of, you know, proxies, HTTP proxies, Linux environments with very specific security constraints. I think we've found, you know, some environments with older Docker versions not working quite well. So a lot of really positive feedback, well, you know, resolution into positive feedback, so that's been a lot of the upcoming work there.
C
One of the things that has also come out of it is, I created a milestone for docs, which I know isn't, you know, pertinent to this specific sub-team, but is maybe something worth looking at, so I'll add it as part of the release planning. I've created a milestone for things, just questions that have come up that we probably want some better documentation around.
C
So it's me talking again. So there are the milestones; I've created one for pack. I put it in the notes section. So there's one for pack; it's got quite a bit of stuff in there already that we've started working on, so by all means.
E
Should we add things like a corporate proxy FAQ or something to that milestone?
C
So that's one of the things that is still to be added. There is an issue that was just opened about Docker mirroring, or Docker registry mirroring/proxying, which is something that I'm also kind of going back and forth with them on. But if you want to keep an eye out for all this stuff, this is the milestone that you want to look at, to see if we could help out there.
A
Okay, Windows dev tooling. So I know we've invited a couple of folks from our Windows containers team here at VMware to talk about how the actual development process works. Take it away.
B
Yeah, so I guess one of the discussions we've kind of been having is balancing some of the tooling that we can use at, like, VMware versus what open source contributors have to use when they're developing against pack and inevitably run into a bunch of Windows tests that they need to get to pass, or that are broken. And so before we, like, spend some time and come up with solutions that work for us, we probably want to figure out what this, like, common piece is that we're going to give to everybody.
D
I think that makes a lot of sense. Yeah, and I've been sort of feeling this. Just to lay out how we tend to work for Windows stuff: we're usually developing on Macs. We usually have a readily available Windows daemon somewhere, usually running as a VM on our Macs, and we usually just set DOCKER_HOST to the IP address of the VM and run all the tests, and they, you know, run against Windows and generally all pass if we did our jobs right. But it's relying on a few things.
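For reference, that setup amounts to little more than pointing the Docker CLI and the test suite at the VM. A minimal sketch, where the IP address is a placeholder and the port is Docker's default unencrypted 2375 (the insecure part is acknowledged below); the actual invocations are commented out since they need a reachable daemon:

```shell
# Point the Docker CLI (and any tests that honor it) at the Windows
# daemon inside the VM. The address is a placeholder.
export DOCKER_HOST=tcp://192.168.99.100:2375

# With that set, one would typically sanity-check the daemon OS and
# then run the suite (commented out: requires the daemon to exist):
#   docker version --format '{{.Server.Os}}'   # expect "windows"
#   make test
```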
D
Obviously we have to have a license for the VM to run it. We also potentially have a license for Fusion, which we usually use to run the VMs on there too. And it's also not the most secure setup: the VMs are running with an exposed, unencrypted TCP port, so it really makes our particular solution only work for a local scenario, with all the licensing baggage that comes with that.
D
So, and you know, we kind of just solved for the immediate need that we had. Technically speaking, there's a different route we could have gone: we could have been developing entirely on Windows machines, using a local Windows daemon.
D
I feel like that still has all the same licensing baggage with it, but maybe it's easier to assume that most devs would have a Windows license, but not necessarily Fusion and Mac and all the rest of that. But it also comes with just a code cost: most Linux make scripts and tools and things don't necessarily work, or they'll work on the Mac, but they won't necessarily work on Windows.
D
What we've tried to do is keep our development process as similar as possible to what we think would be the average contributor to CNBs, but in the meantime, you know, bolt on this one extra thing that we think is useful for us. Now, we wanted a solution, I think, having the options that we did of getting licenses and that, but there are a few different ways to tweak that same model of defaulting to working on a Mac or a Linux desktop and setting DOCKER_HOST to a Docker host somewhere else.
D
The VM doesn't necessarily have to be local. You can, you know, play with setting up a secure Docker daemon that's running up on a GCP instance, or something like that, or it could be running on your Windows machine.
D
You could be doing, you know, if you're a Windows developer, you could be running on WSL on an actual Windows machine, but in a Linux mode, targeting a daemon that's sitting there too. There's a whole bunch of different knobs to turn. So I wouldn't want to assume that the way that we do it is necessarily the right solution for the community contributors, and yeah, I'd love to hear other thoughts or approaches, especially pain points, too, that folks feel like they're having.
D
The volumes are local to the remote VM. So if you're mounting a path, you're bind-mounting a path into that container, it's going to be from your remote Windows daemon. There are ways to sort of mitigate that.
D
We've tried to write all the make tasks so they don't rely on that kind of functionality, and, like some of the other tooling around there, it'll use a docker cp instead to move the source code over there before you run everything. A few of those, at least just from a usage, a pack-user perspective, might be a little surprising, but from a dev perspective, I think we've worked around most of the quirkier bits.
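The docker cp pattern mentioned here sidesteps the remote-volume problem: instead of bind-mounting the source tree (which would resolve on the remote VM's filesystem, not yours), files are copied into the container over the daemon API. A rough sketch, with the image name and paths invented for illustration, wrapped as a function so nothing runs until invoked:

```shell
# docker-cp-based alternative to bind mounts, usable against a remote
# daemon where local host paths would not resolve. Image and paths are
# placeholders, not the project's actual tooling.
copy_and_test() {
  local id
  id=$(docker create golang:1.20 sleep infinity) || return 1
  docker cp . "$id:/workspace"             # copy instead of mount
  docker start "$id"
  docker exec -w /workspace "$id" go test ./...
  local status=$?
  docker rm -f "$id" >/dev/null
  return $status
}
# Usage: DOCKER_HOST=tcp://<vm-ip>:2375 copy_and_test
```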
C
Yeah, I guess what I'm thinking is, you know, I think in this context we're talking about a contributor development experience, right? And in that sense I'm trying to think of what that would mean. If you were to need to create a new test for Windows volumes with a remote Docker daemon, that, I think, feels very strange, right? And I guess maybe the part that I also question is whether the assumption of having a, you know, Mac or Linux development environment extends to, like, that.
D
Yeah, if they already have a Windows license, like, I feel comfortable saying: do all your development in Windows Subsystem for Linux. You have a nice little Windows terminal there, a Windows kernel that moves everything over for you; like, run it there, and then it's almost like working on a native Linux machine.
D
You can switch your back end from Linux containers to Windows containers. Like, I feel like that WSL case is pretty smooth, and I kind of almost want to do it myself. If you feel like that would be, if that's acceptable, to say for someone who's using a Windows desktop, "just use WSL," then I feel like that might be solved already.
C
I guess, I don't know, I think I brought this up at some point with some individuals, but I was curious whether or not to break up the, like, two components, right? One is having a development machine, like a Docker host machine, right, one that has Docker set up and has whatever tooling might be necessary for local development within that Windows domain or context.
C
That could be one piece of the puzzle, right? And then, if you then need to attach to it remotely, because you happen to work on a Mac or Linux machine, then that's a different piece of the puzzle, with different, you know, configuration or guidance. And so I think if we break it that way, and then we talk about the former, where everything's set up within a Docker con, or sorry, Windows context, with WSL on a Windows machine, then those individuals that do happen to develop on Windows automatically get that.
D
I feel like that's a good call, yeah. I feel like the hybrid case that we support now, or that we, meaning Maloney and myself and some of our team members, use, is a little strange.
D
It happens to work pretty well, just once we figured it out. But I feel like, if you're, and I am pretty sure this does work now, but if you're assuming that you have a Windows developer, I feel like they're in the ideal case, because then they can work all in a Linux terminal and then switch to LCOW or WCOW however they wish to. And I'm pretty sure that already works really well. I feel like their dev experience is probably best, and better than ours.
C
Yeah, and in that case it's just about, and I think this is where the question of whether or not we can improve this comes into play, there is still some development setup, right? Like, you have to probably set up Cygwin, right, if you want to run bash and make to be able to do the local development. I don't know if we want to be suggestive on the IDE, right, like VS Code versus something else, but it's those minor setups, right?
C
I see Simon shaking his head as if to say, "I agree, we should not be." But nonetheless, there is some development setup that we need in order for you to be able to build, like, you have to install Go, right, that kind of stuff. And so a lot of that to me comes down to, like, automation, right? Like, I wonder if it would be helpful to have some setup script or automation that says: okay, just run this and it'll set up all the stuff necessary, right? However well it's compartmentalized and reusable.
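As a sketch, such a bootstrap script could start as nothing more than a checklist that reports which prerequisites are present; the tool list and install hints here are assumptions, not the project's actual requirements:

```shell
# Report whether each tool a contributor needs is already installed.
need() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok:      $1"
  else
    echo "missing: $1 ($2)"
  fi
}

need go     "https://go.dev/dl/ or your package manager"
need git    "apt/brew/choco install git"
need make   "build-essential, Xcode CLT, or Cygwin"
need docker "Docker Desktop, or set DOCKER_HOST to a remote daemon"
```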
D
That's a great idea. In fact, Anthony on our team has actually done a little dev automation for the sort of, you know, compartmentalized VMs that we have. It uses Packer.
D
But ninety percent of what it does is just run local scripts inside of a VM to configure it to be in the right state. We haven't quite finished it, and it's also meant for standing up Fusion VMs right now, but I think it's super easy to port to run on any platform that Packer runs on. So if you feel like Packer is a viable tool, we could kind of polish off the work that we've started so far.
E
Just for some context: like, we, you know, here at Salesforce, I think we're issued either Linux or Mac by default. I do have a Windows laptop that was issued; it's not up to date enough to get WSL 2, and I don't think they enabled that by default. So, just to kind of throw that out there, which does cause some inconsistencies in the tooling.
E
Obviously, as well as performance. And I guess from my personal standpoint, it would be nice if, if I put a PR up to pack or lifecycle and there are Windows tests, I think the main thing is making sure the tests fail in a way that's easy to read for people who are unfamiliar, or less familiar, with Windows. As well as potentially at least documentation on: if you don't have a Windows box, what do you do? Can you, like, fork lifecycle and then run your GitHub Action?
E
Can we have a default GitHub Action where maybe you can put, like, an ngrok tunnel key or something in there, so that you can, like, SSH to it on failure? Like, I've seen some, you know, kind of like CircleCI's SSH and stuff like that. I wonder if there are some steps that we could eventually take that would allow us to, you know, let people ease into at least getting some visibility into the test that they're trying to fix.
D
Yeah, those are good data points there. So one thing I was going to mention about the WSL part: we are currently targeting Windows Enterprise. At least, from what we've seen, it is the most common version of Windows that we'd see at our corporate-style customers.
D
It does only support WSL 1, and I'd have to go back and validate, but I'm pretty sure the tests should be able to run on that one. I'd be interested in your thoughts on what's the right version of Windows to go with for the ideal case, because with Enterprise you can download almost any version of Windows 10 from Microsoft, except for the version that we're targeting: you can't actually get 1809 without a license, and yeah.
D
Of course, the most common one is the one that they make you pay for. But we could, yeah, I think that'd be interesting, to see what y'all are thinking about that. Also, as far as the test reproducibility, we had done a little bit of work on, like, the CI failure reproducibility and the clarity of the error messages. We've done a little bit of work to make the lifecycle tests run exclusively in a Docker container.
D
However, you have to keep all your pack dependencies in one container, but then it's likely to fail exactly the same way in CI as it does locally. There's a lot of overhead that goes along with doing that, though; it's just harder to make tests and code and everything run perfectly inside of a container.
D
But if we feel like that reproducibility is important enough, we could look into that as one solution. What we've seen is that whenever we make a fork or open a PR, all the Windows tests do run, and yeah, occasionally the test failures will be a little opaque or Windows-specific, but at least you do get that coverage for a PR.
C
I love the idea of having an ephemeral development box, or, you know, something that you could point to. Feasibility, though; I don't know exactly how we could get that right. It seems like you pointed at Circle CI as potentially offering that, I know.
E
Not exactly what we need, necessarily, but I just know that, like, GitHub, you know, the SSH ability is something that I think could be useful, so that you could re-run things and experiment with stuff. Like: okay, well, the other Windows tests have this, like, bin/detect batch file; maybe I need to create one of those and rerun the test. Being able to iterate in some way, without having to have access to your own Windows box, would be an ideal step.
A
Yeah, I think when we've talked about this as an internal team, our focus has often come down to the security concerns about giving people SSH access to random boxes on the internet.
C
I mean, for what it's worth, right now I believe both of the Windows runners, LCOW and WCOW, are ours, or managed by the Cloud Native Buildpacks project. So it's still sensitive, but it is isolated in a sense, so the risk is very, very minimal. Yeah, I mean, I like that as a feature. Again, I question the feasibility of being able to support that, especially in the short term. I think long term it's very possible; short term...
D
How would you feel about putting more of the runtime dev environment into a container, with the goal of: if it fails in CI, you're almost guaranteed to be able to rerun the same container, maybe one that CI even saved for you from your failed run, and if you rerun it, then it fails locally the same way? I mean, you still need a Windows daemon somewhere to run that against, but, like, if the environment differences are able to be captured in a container, do you feel that would be useful in your dev workflow?
C
I want to update my branch, make changes to it, and ultimately run the tests with a very short feedback loop, right, like I would through my IDE. And I'm not sure that having, like... I don't see how that would happen, right? Like, how exactly do you envision me updating the code and being able to run that code within that container? Yeah.
D
Something we have in lifecycle, just to put some context around it, is a make task where you can say, "make run windows," essentially, and then any string you pass after that gets run inside of a Windows container, but using your current uncommitted source code from your repo. So essentially the workflow is: make run, run it against the container ID I saw that failed in CI, and all it's doing is schlepping your source code into that container.
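As a sketch of what that kind of task might do under the hood (the image name, workspace path, and wrapper are invented; the real lifecycle Makefile may differ), wrapped as a function so it only runs when called:

```shell
# run_windows "<command>": run an arbitrary command inside a Windows
# build container, first copying in the current (possibly uncommitted)
# source tree. Image name and workspace path are placeholders.
run_windows() {
  local image="golang:windowsservercore"   # placeholder image
  local id
  id=$(docker create "$image" cmd /c "$1") || return 1
  docker cp . "$id:C:\\workspace"          # schlep the source code over
  docker start -a "$id"                    # attach so output streams back
  local status=$?
  docker rm "$id" >/dev/null
  return $status
}
# Example: run_windows "go test ./acceptance/..."
```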
C
Dreamy, yeah. No, I think that seems really good, right, especially for somebody that, like, really just finds it painful to have to set up a Windows environment. The downside is you still need that Windows, or yeah, Windows daemon somewhere, and I think that's maybe more at the core of the problem than anything.
D
There's a little bit on that. So we do have, at least with the way our team is set up, we do have a shared daemon that we can use if we want to. We can also run on our desktop, or there's one that's on our VPN right now.
D
I was writing a blog post along with Joe about a way to harden a daemon in the cloud. Oh, I guess we're at time too, sorry.
D
So, you know, if we had someone who we knew, we trusted to use the daemon, say it's a regular contributor, or just someone who we know, and we'll throw away the thing after this: if we gave them, issued them, a new client cert with the command-line utility or something like that, gave that to them and said, you know, go at it, debug... like, that's...
D
I feel like that's about as good as we could get with this shared daemon, beyond having someone, you know, on their own GCP account stand up a VM based on a script that we have, and then, you know, use their own VM to do that.
C
So, two things. I think both of them are very, very viable, right? One of them is giving people a guide to set up Windows in a cloud provider, right? I think most people that work in this realm will at least have one of the cloud providers, and so, giving them the option to choose their provider, right, give them a very quick guide, and then, most importantly, give them, like: hey, this is the script, this is the thing that you execute, and it just sets up this...
C
...you know, very quick setup for you to be able to do this, and then how that ties into TLS and making that secure, right. Making that as easy as possible seems very viable and very useful for that experience of: hey, I found an issue, what do I do next? We could point them to this guide of, like: hey, set up your Windows machine on a cloud provider. I think that's a really good way to go about it.
C
Creating the shared environment where we could provide, you know, certs: the question is, can we revoke those certs? Is that a possible operation? Or rotate them? Or, I'd have to dig in, trusting people to do that for you.
D
Well, the more clever way I figured out how to do this: you actually just stand up a Docker container that is your cert go-between, a Docker container that speaks the socket, or, you know, inputs TLS and outputs to this Docker daemon socket. I actually made a little proof of concept that does that. So in that case, if you want to rotate a cert, you could just have one shared cert that everybody uses and then expire the whole thing, you know: recreate the TLS termination container, recreate the client certs.
D
However,
you
want
to
do
that.
If
you
just
want
to
keep
it
super
simple
and
brute
force,
you
could
do
it
that
way.
If
you
want
to
get
find
grain
and
be
a
ca
yourself
and
maintain
a
certificate
revocation
list
and
issue
client
search
individually
for
everybody,
you
can
do
that
too,
with
the
same
same
model,
so
it
would
be.
The
I'd
have
to
dig
into
exactly
how
to
do
that,
but
I
know
that
there
is
a
way
to
do.
D
You
know
mutual
tls
with
certificate
revocation,
but
the
simpler,
or
at
least
the
you
know,
the
dumb
simple
way
of
doing
it
is
just
recreate
your
server
cert.
Every
time
you
want
to
expire,
someone.
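The brute-force rotation scheme can be sketched with plain openssl: one throwaway CA signs a server cert for the TLS-terminating container and one shared client cert, and revoking everyone just means rerunning the whole script. Names and the 30-day validity below are illustrative:

```shell
set -e

# Throwaway CA, regenerated wholesale whenever access should be revoked.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=buildpacks-dev-ca" -keyout ca-key.pem -out ca.pem

# Server cert for the TLS-terminating go-between container.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=daemon.example.test" -keyout server-key.pem -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 30 -out server-cert.pem

# Single shared client cert handed to trusted contributors.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=contributor" -keyout client-key.pem -out client.csr
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 30 -out client-cert.pem
```

A client would then reach the daemon with Docker's documented TLS client settings: `DOCKER_TLS_VERIFY=1` and `DOCKER_CERT_PATH` pointing at a directory containing `ca.pem`, the client cert as `cert.pem`, and the client key as `key.pem`.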
C
Yeah, well, that makes a lot of sense. I don't know that that's ultimately necessary from the get-go. I think, if we were to envision sharing a resource, right, it would be very similar to what the project currently does, where we do share resources, right, like GCP account information, all that stuff, and it's to project contributors that have then identified themselves, right, and then we at that point kind of trust them to a certain degree, enough to give them that information.
C
So
I
think
a
very
similar
concept
here
is:
we
could
set
something
like
that
up
and
once
they
get
that
level
of
project
contributor,
where
we're
already
providing
them.
You
know
infrastructure
information.
We
could
very
easily
provide
them
with
this
and
and
have
that
level
of
trust
there.
So
we
wouldn't
be
giving
it
to
any.
You
know
random
community
contributor,
but
most
likely
to
one
of
those
that
has
already
contributed
and
gone
up,
that
tier.
C
Dan, does that give you enough to go off of for now? Cool, all right, yeah.
D
Then I feel like the Packer scripts that Anthony started feel like the closest thing to that. Let me see if I can find a way to share those with the community and, yeah, go that route, if that makes... Awesome, that'd be great, yeah. Cool, thanks, everybody. Thank you.