From YouTube: Working Group: 2020-07-09
Description: Greenhouse
D: Yeah, if any of you want to say hi you can, but I won't make you. So I guess we can just get into it. We've been using buildpacks for, I don't know, maybe a year now.

D: I just want to give a little context on how we arrived at this, how we came to use buildpacks in the first place, and what we wanted out of that. For some really big context: we maintain a whole platform for product developers, like Heroku. We need to build application images to run the code that our product developers are writing, and that's what we started doing.
D: After a little while of doing that, we had a bunch of Ruby images, so we wanted to not duplicate so much between all these very similar Ruby applications. So we extracted out a base image, a Ruby base image, and had all of our Ruby applications build in reference to that. But that proliferated a lot, because then we had a lot of Ruby versions to support, which was another problem.
D: And versions of the base image for Ruby itself. Going along with this kind of big build box that we had sitting around, another issue was that we were running docker build on it, which requires privileged access, and that's not a great thing. We were also in a Docker-in-Docker situation, which is also generally pretty undesirable, I guess, unless you're developing with Docker. I mean, it can work, but you can get into some trouble, as we can tell you about.
D: We also had a few other kinds of applications floating around, written in other programming languages, and even taking away the Ruby-specific stuff there was still a lot of duplication: specific things that we needed to do inside of the images, and while building images, so that the images could run and be understood by our deployment platform.
D: We were familiar with Heroku as well, so we had that experience ourselves. One of the other benefits we wanted out of taking that design upon ourselves was uniformity: we wanted uniformity and consistency across these images, without all this duplication. We also wanted better Docker images. Our Docker images were pretty big; they included runtime dependencies and build-time dependencies. And why was that?
D
Well,
at
the
time
we
I
don't
think
multistage
builds
were
available,
maybe
with
multistage
builds
today
you
could
work
around
this,
but
at
the
time
you
know
that.
Wasn't
that
wasn't
an
option
for
us
so,
but
but
we
wanted
to
keep
all
the
layers
you
know
so
that
we
could
have
some
some
caching
involved
in
our
darker
builds,
and
so
we
didn't
want
to
flatten
our
images.
D
So
yeah
with
all
that
I
mean
we
explored
some
options:
I,
actually
don't
I
wish
I,
don't
remember
what
else
we
do
this
with
you
with
that,
but
I,
don't
I
didn't
look
at
it
myself,
naked
and
well.
We
hit
up
on
build
packs
and
have
been
using
that
for
about
a
year
so
currently,
like
the
state
of
the
art
for
us,
is
we
have.
We
run
everything
on
kubernetes
pretty
much
or
at
least
all
the
things
I.
Might
that
my
team?
Does
we
go
along
for
the
ladies
and
part
of
our
kubernetes
clusters?
D
Are
our
kubernetes
cluster
to
set
up
with
our
bow,
which
is
a
kind
of
modern,
CI
CD
tool
for
specifically
built
for
kubernetes,
so
our
docker?
So
our
sorry,
not
our
docker
builds,
but
our
image
builds
run
as
workflows
on
kubernetes,
where
you
know
it's
just
a
bunch
of
stages
looks
pretty
much
like
the
build
phases
that
thought
that
you'd
run
anywhere.
You
know,
analyze,
restore
detect,
analyze,
restore
build
export
all
that
those
are
running
like
their
own
containers
in
the
margo
workflow.
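The phase ordering described here can be sketched as a tiny dry run; this is only an illustration of the sequence, since in the real setup each phase is its own container in the Argo workflow:

```shell
# Dry-run sketch of the lifecycle phase ordering described above. In the
# real workflow each phase runs as its own container inside the Argo
# workflow; here we just echo the fixed sequence.
phases_run=""
for phase in detect analyze restore build export; do
  echo "running lifecycle phase: ${phase}"
  phases_run="${phases_run} ${phase}"
done
```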
D
This
is
kind
of
a
legacy
thing
we'd
like
to
get
away
from
this
and
move
towards
using
Heroku
stack,
but
we
haven't
yet
that's
that's
it
to
do.
Well,
you
know,
maintaining
our
own
stack
doesn't
really
guarantee
compatibility.
We
like
up
sharing,
build
packs
that
we
would
that
we
might
like
to
use.
So
that's
one
benefit.
That
would
be
one
benefit
of
doing
that,
but
our
stack
is
pretty
similar.
It's
like
it's
nothing!
It's
nothing
that
nothing
bad
and
as
for
build
packs,
the
build
packs
are
stack
supports
are
all
written
by
us.
D: We are not at the point of using any upstream buildpacks, like I said. One minor legacy of that is that most of our buildpacks are pretty simple. We don't do anything like what I would call a meta-buildpack, where one buildpack is just defined in terms of the other buildpacks that you want to drag into the build process. We don't do anything especially fancy.
D
So,
to
give
an
example,
we
have
our
Ruby
build
pack
which
installs
the
Ruby
runtime
does
bundle,
install
configures
the
Ruby
environment
and
some
some
kind
of
ways
that
we
like
to
in
some
ways
that
we
like-
and
this
is
all
done
by
one
bill
pack
rather
than
like-
maybe
having
a
Ruby
runtime
build
pack
and
a
separate
bill
pack
for
me,
or
rather
even
like
a
root
of
an
m-pam
max
Ruby
interpreter
build
pack.
Instead
of
like
a
movie
build
back
on,
we
only
use
one
kind
of
interesting
thing
about
this.
D: One kind of interesting thing about this buildpack is that it supports having a secondary Gemfile and a secondary bundle install, because our app developers discovered that this was useful for doing large-scale upgrades, like, say, upgrading Rails versions in a project that has many dependencies. So yeah, we have this nice little secondary Gemfile support, and really all of our other buildpacks are kind of oriented like that. Well, not all of them.
D: And probably our most interesting buildpack is our apt buildpack, which we use to install arbitrary apt packages during the build phase. So an application can just say: I want all these packages, any kind of apt packages, in the usual apt command-line syntax of name and version, and the buildpack will go and try to install those. It's kind of neat and hacky. It uses apt to get the .deb files and then it uses dpkg to extract them to a layer, and usually everything works.
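The download-then-extract approach described here can be sketched in a couple of commands. This is a hedged illustration, not the team's actual buildpack: the layer path and package name are made up, and the real apt-get/dpkg invocations are left as comments so the sketch stays self-contained.

```shell
# Hedged sketch of the download-then-extract approach described above.
# The layer path and package name are hypothetical, and the real
# apt-get/dpkg invocations are left as comments.
layer_dir="/layers/apt/packages"   # hypothetical CNB layer directory
pkg="libxml2"                      # hypothetical package name

# apt-get download "${pkg}"                # fetch the .deb without installing it
# dpkg -x "${pkg}"_*.deb "${layer_dir}"    # extract the .deb into the layer

msg="would download ${pkg} and extract it into ${layer_dir}"
echo "${msg}"
```

Extracting with `dpkg -x` skips the package's maintainer scripts, which is exactly why, as noted below, not every package behaves when handled this way.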
D: But not every package expects to be handled that way, so it doesn't handle everything. With all these in-house buildpacks we've been able to move all of our applications over to buildpacks, except for one, which has pretty complex runtime dependencies. It is a Python app that uses a Python library called Theano. I don't know what it uses it for, but Theano basically takes Python code, translates it to C, and compiles that at runtime, so it needs g++ at runtime, and satisfying that was really hard.

D: So we haven't done that one yet, but everything else was pretty smooth. I guess another interesting thing about the way we use buildpacks is that we have a file. Our platform is called Djoku, like Heroku, and we have a file called the Djoku manifest, which all of our applications have.
A: You mentioned you use Argo to do Cloud Native Buildpacks builds. Do you have any examples of that, or tooling, or could you speak a little bit more about what that kind of looks like on your infrastructure?
D: We have a custom resource to represent builds at a higher level, but for the builds themselves, when the controller sees a build resource that's pending, it goes and creates an Argo workflow. Let me take a look at that; while I'm pulling it up, the high level is: it has to do a git pull to clone the app repo from GitHub. All our stuff lives there, so it clones it from there.
D: Right, and from there you go into all the lifecycle build phases. Because these build phases are all taking place in their own containers, we have to pass around the layers directory, and even the application repo directory, as Argo artifacts, and as volumes as well.
D: Yeah, it builds it using the latest of our own stuff; we're not really checking for any upstream stuff. We have our builder image and the Argo workflow just grabs that. Our builder image also has to define all the things that it expects to see, like a Ruby buildpack, and maybe an app has Rails assets and needs Node.js as part of that, or it's something like a Go application, which will also work.
D
All
the
spectrum
is
flow
through
the
build
part
of
the
pipeline.
We
haven't
had
much
of
a
you
stays
for
rebasing,
because
you
know
we're
building
all
the
time.
You
know
we
do
multiple
deployments
sort
of
main
application
every
day,
so
yeah.
We
need
to
know
how
to
change
to
the
to
the
Builder
or
the
source
of
a
stock.
Rather
that
change
can
be,
can
make
it
a
production
very
quickly.
D
A: On the VMware side, we wanted to create an experience where the user doesn't have to think about updates; they're flowing out to their app. It's very declarative: here's the source code, and then it'll pick up the latest buildpack and rebuild, pick up the latest stack and rebuild, and that's released. So we put together some CRDs and controllers in an open-source project called kpack.
A: That's really focused on that sort of upstream monitoring. I was curious: it seems like you've built more of an imperative workflow, and your users are more interested in that. I'm curious if you looked at kpack, or what your motivations were for going the way you did.
B: With the way that we're building images and tagging images, we're not ever really updating a latest tag for a particular image; we're always going to tag our images with the git commit SHA. This sort of speaks to the rebase question as well. That's why we don't use rebase: because we're always going to build a new image for new commits.
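The tagging scheme B describes, immutable tags derived from the commit SHA rather than a mutated latest tag, looks roughly like this. The registry and app names are made up for illustration, and a literal SHA stands in for the output of `git rev-parse --short HEAD`:

```shell
# Immutable image tags keyed on the git commit SHA, as described above.
# In a real pipeline the SHA would come from `git rev-parse --short HEAD`;
# a literal stands in here, and the registry/app names are hypothetical.
registry="registry.example.com"
app="sample-app"
sha="0a1b2c3"   # stand-in for the current commit SHA

image="${registry}/${app}:${sha}"
echo "building and pushing ${image}"
# pack build "${image}" && docker push "${image}"   # real commands (sketch)
```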
E: A workflow that I've seen with Cloud Native Buildpacks users is to use a single mutable tag to track the identity of the application, and then also apply immutable tags, like a git SHA, to each image that you're generating. I was wondering, do you do that? Even if you're not consuming latest, do you have a mutable tag that sort of gives you the head of a branch, for an analogy there?
C: I had a question around your manifest file. Some of the stuff you talked about we definitely don't support in project.toml, but did you look at project.toml when you were designing the manifest file that you have for configuring builds and buildpacks and other things? And I guess, from a product perspective, we're definitely interested in things we can do to upstream stuff to support your use case as well.
E: It's interesting, because one of the goals of project.toml was that a bunch of platforms that all run buildpacks could sort of unify under an interface, where an app could run on any of those platforms while still using the same file. I wonder if, in the wild, it's turning out that that is not a particularly realistic use case. It seems like everybody has platform-specific features they want to layer on top, and not many apps would actually get pushed to any arbitrary one of these platforms.
A: I have a question, to kind of fill in here a bit. From a developer's perspective, how much knowledge do they have of what's happening behind the scenes with regard to buildpacks? Because it seems like there's a certain team that's creating buildpacks and setting up the infrastructure that manages all that, and then the app developers are more or less doing product-specific applications. So what does the overlap there look like, if anything?
C: A question along those lines of what Xavier is asking, but probably more about pre-buildpacks. When you were doing stuff with Dockerfiles, did the app developers take on owning, like, writing those Dockerfiles? Or did the team that owns the buildpacks and stuff today kind of scaffold and set that up? What was the team composition, the roles and responsibilities, for managing that?
D
They
were
those
Locker
files
are
mostly
written
in
the
same
by
the
team
that
exists
today,
which
maintains
all
this
flow
facts,
but
it
was
in
the
app
Rico.
So
app
developers
could
also
touch
that
of
anytime
day.
I
don't
know.
Is
there
anything
else
more
specific,
but
but
I
mean
you
know.
People
remember
about
together.
B: There's still a little bit of that; we still provide a couple of little hooks. The big one would be pre-deploy: we will run basically an arbitrary shell script during deployments that runs before the rest of your workers roll out, for things like database migrations or asset uploads and that kind of stuff.
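A pre-deploy hook of the kind described here is just an arbitrary script the platform runs before the new workers roll out. A minimal hypothetical example follows; the migration command is a common Rails idiom, not necessarily what their platform actually runs:

```shell
# Hypothetical pre-deploy hook: the platform runs a script like this once
# per deployment, before the rest of the workers roll out.
set -e
echo "pre-deploy: running database migrations"
# bundle exec rake db:migrate   # example migration command, an assumption
hook_status="ok"
echo "pre-deploy: finished (${hook_status})"
```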
D: Okay, so here I am. We have this little app that we use to exercise our platform, which, like I said, is called Djoku, so that's why it's called the Djoku sample app. You see this djoku build command. I'm just making a new commit; I'm assuming I've already built this one, that I've already built an image for this SHA successfully. And we don't mutate image tags, except for latest, where we have to in order to take advantage of caching.
D
We
don't
we
don't
like
to
mutate
image
that,
especially
where
shots
are
concerned
and
for
images
that
might
end
up
in
production.
We
can
tell
you,
like
scary
stories
of
what
happens
when
you
use
a
mutable
image
tags,
but
suffice
to
say
we
don't
do
that
so
I'm
making
a
new
commit
and
I'm
going
to
push
it
up
to
github,
and
then
that
should
be
all
we
need
to
to
do
this
way.
So,
let's
go
to
shot.
I
can
also
make
this
a
little
bigger.
I,
don't
know
how
big
it
needs
to
be.
D: So I have this other, so it opens down here because, no, oh, we've renamed it. My bad, okay. So now it'll work, I promise. I have this other terminal down here, because in the top one we'll be able to see the output from the Djoku CLI, and in the bottom one I can show you the resources, the Kubernetes entities, that are associated with these things.
C: I had a question around that apt buildpack you were talking about. I think both on the Heroku side and, I believe, on the Paketo side, we also have experience with doing apt buildpacks, and with the drawbacks, the kinds of things you have to take into account, for apt packages that don't work out of the box. There's been a lot of active discussion, I think, with the OS image extension thing that Steven wrote, and then the root buildpacks, and, I guess we call them snack packs or tackle box, the things that Joe has been working on.
C
Have
you
had
a
chance
to
look
at
any
of
those
like
I?
Don't
have
any
working
implementations
of
stuff
but
I
think
we're
definitely
really
interested
in
curious
from
people
kind
of
outside
of
these
two
groups
of.
Does
this
solve
help
solve
some
the
needs
that
you're
Chloe
trying
to
solve
today?
That's
a
little
more
hacky,
like
you
said,.
A: To add to that: Cloud Foundry had an apt buildpack that would do that unpacking of packages into individual directories, which we ran for a couple of years, but we never felt comfortable enough telling people to use it in production, because too many packages would behave kind of unexpectedly. So I'm also really interested to hear whether you've looked into some of the apt RFCs.
D: It's pretty simple. It just has information about what repo this concerns and what git commit SHA we want to build, and also the status of that build, like whether it's running or whether it succeeded or failed, for whatever reason. So we have a controller that looks at those build resources and makes a workflow in response to them.
D
Did
that
answer
your
question
did
okay.
So
what
happened
here?
Everything
went
by
as
Matt
and
I
were
talking,
so
we've
got
our
Ruby
build
pack.
This
is
I.
Don't
know,
I
tried
to
model
the
output
after
what
I
was
seeing
from
pack
itself,
so
that
layers
of
an
indentation
to
communicate
segments
of
the
of
the
build
time?
D
And
that's
really
all
this
app,
does
it
all
it
pretty
much
only
needs
rubies.
So
so
that's
all
we
do,
and
then
this
this
one
down
here
is
like
our
is
a
what
is
it
it's
setting
everything
up
so
that
so
we
can
use
a
CMV
process,
type
that
environment
variable
so
to
just
pass
that
in
Bart,
like
environment
and
then
have
the
the
container
do
whatever
its
job
happens,
to
be.
D
Okay,
and
now
we
have
down
here
as
well,
the
custom
resource
has
faith
has
a
phase
column,
the
output
has
a
phase
column
and
it
succeeded,
whereas
before
it
was
fun
cool,
maybe
it's
a
like
around
three
minutes
or
something
like
that.
D
So
there's
kind
of
a
like
generic
things
that
we
want
to
know
like
what
is
the
name
of
this
application
when,
when
we
were
migrating
to
build
packs,
this
was
useful
I
think
we
can
actually
delete
this
now.
So
you
should
here
is
the
components
key,
which
defines
all
those
things
that
get
a,
and
this
is
a
part
that's
analogous
to
the
profile.
All
these
command
attributes
are
what
we
use
to.
We
translate
this
into
the
format
so
that
we
can
use
it
with
a
CMV
process.
D: Like, if one of the application teams is working on, you know, a Ruby runtime upgrade, from some minor version to another minor version, then while they're developing that and deploying it and making the images, the cache gets thrashed, because someone will build with, like, Ruby 2.6.x and someone builds with 2.7.y, and every time you alternate, the whole Ruby cache gets invalidated. So I don't know if we can...
D: I don't know that that was what I meant. I mean more like: if we're going along doing 2.6.5 build after 2.6.5 build and no one changes any gems, the whole Ruby build is all using cached results. But once someone comes along and does, like, a 2.7.1 build, the 2.7.1 image will become latest, and then the next 2.6.5 build that someone tries to do won't be able to use any cache.
D
You
know
any
cash
you
know
like
we
could.
We
could
make
this
a
little
smarter
by
like
knowing
oh.
This
is
a
ruby
you
know
by
by
instead
of
pushing
latest,
we
could
push
like
latest
two
six
five
and
then
you
know
when
we,
when
we
do
at
six
six
five
build.
We
have
to
reference
that
as
the
as
the
image
that
we're
building
against
and
the
image
that
we're
gonna
export
later,
but
we
don't
we
just
we
just
do-
is.
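The version-scoped cache tag sketched here, pushing latest-2.6.5 instead of a single latest so alternating Ruby versions stop thrashing the cache, could look like this. The tag format and registry path are made up for illustration:

```shell
# Version-scoped cache tags, as suggested above: key the cache image on the
# Ruby minor version instead of one shared `latest` tag. Names are made up.
registry="registry.example.com/sample-app"
ruby_version="2.6.5"   # would be detected from the app in practice

cache_tag="latest-${ruby_version}"
cache_image="${registry}:${cache_tag}"
echo "using cache image ${cache_image}"
```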
C: If this is a problem, you could in theory, not for the runtime but for the gems layer, not clear that cache, or make sure to only clear it, like, every, I don't know, three versions or something, if that's a big pain point for you. I know it's not a specific fix I've seen, but it is a thing you could do in your buildpack, I think, to help mitigate that pain point.
E
One
thing
we
were
talking
about
yesterday,
mostly
in
the
context
of
pack,
is
sort
of
how
we
use
the
image
tag
like
a
single
mutable
image
tag,
just
sort
of
standing
for
image
identity
right.
So,
like
you're
saying,
if
you
had
two
different
mutable
tags
for
these
different
things,
you
could
get
around
the
thrashing,
but
I
wonder
if
we
need
to
if
that
interface
is
very
unintuitive,
it's
like
not
something
that
people.
A: I wanted to comment on something that was said earlier, not really a question. You mentioned wanting to move towards a stack ID that would work with more buildpacks, or with stock stack images. The buildpacks we did a while ago unified on a stack ID for everything that's Ubuntu Bionic-based, so that you can have your own stack and other buildpacks will work on it. I linked a description of what that looks like in there.
E: One thing they've recently approved as an RFC is that platforms will now be able to set a CNB_PLATFORM_API environment variable, which will then configure the version of the interface you're using for the lifecycle. So one thing I'd be interested in is, once the next version of the lifecycle comes out that supports that, whether you find it easier to update, because then you can keep pulling in new ones.
A: A question about your apt buildpack: have you run into problems with that strategy, and if you have, what particular things have been painful about extracting packages into layer directories with different absolute paths?
B: Really only one app uses it, but we have this app that's doing document conversion using LibreOffice, which is finicky about where its files are, and I think we had to basically compile the package ourselves; we tarred it up and stuck it on S3 somewhere, and then all the custom-packages buildpack does is look for that package.
A: It's really great to get your feedback on the application mix and stuff, because I think this has been a large thing for a long time that people wanted a solution to. I was curious, and it seems like you've run into some very similar things that other people have. I think Heroku had an apt buildpack kind of like that for a while. It'd be really great to find something that solves the problem.
C
Yeah
I
guess
someone
on
that
note:
is
there
a
good
way
or
what
is
the
best
way
to
kind
of
engage
you
and
kind
of
a
team
like
it's
really
insightful
like
these
sessions
are
really
insightful
and
valuable.
So
thank
you
for
doing
it
as
we're
coming
up
on
closing
us.
I
really
appreciate
all
of
you
taking
the
time
to
do
this,
but
is
there
a
good
way
to
kind
of
engage
you
on
certain
RFC's
that
we
think
we
would
love
your
input
more
on,
so
you?
B: I don't know, tag us on GitHub, I guess; we could relay some of our GitHub handles. I know I was on the buildpacks Slack for a while, until I reinstalled things on my computer recently, but I'll get back in there. Paul, I know you're on the buildpacks Slack, right? Yeah, regularly, yeah.
A: All right, I think we're about at time. This was really great, and I think a lot of the feedback that you provided here will let us figure out how to include you more in the things that you're interested in on GitHub. So, you know, excited about collaborating in the future, if you guys have the bandwidth to do that.