From YouTube: CNB Sub-Team Sync: BAT - 11 Feb 2022
C
Let's get started. Just a reminder that there is a doc attached to this meeting; please sign in as attending.
A
This is your moment to give us the official intro.
B
Morning. Yeah, so I'm Jonathan McAllister; you can call me Johnny. I recently joined Heroku at Salesforce, managing the Languages team.
B
I mean, we have the kind of legacy Heroku buildpacks, and now we're migrating to the CNBs. I've gotten to know Terence quite a bit, who's involved with this project, and he advocated for someone from our group to participate in this. So here I am.
C
And I think that's pretty much it. Everyone else? No? But I think it might be nice to just go ahead and introduce ourselves to Johnny. So, I am Sam. I work at Bloomberg on the ML platform team; I'm one of the maintainers here.
A
I can go next. I'm Emily. I work at VMware, on the core team of the CNB project with Terence and Joe. This is one of the sub-teams I like to be involved with.
C
Okay, status updates. I know Aidan isn't here today, but he asked me to pass on a few things. He's been working on: one, updating our buildpacks to API 0.7; and two, in the last Learning Team meeting we were discussing the lack of any Go documentation around our bindings, so I think he will probably be helping out with that as well, just to make sure that, apart from bash, we also have good documentation for the only binding we support officially, which is Go.
B
I don't know, for what it's worth, being new I don't know what is worthy of sharing and what isn't, so forgive me. But I think you guys are aware that we're kind of building a Rust implementation of libcnb, and that's going pretty well, so we're starting to roll it out, at least in local prototype form.
B
More coverage for more languages, you know, and the project is accelerating a little bit into next quarter, so I'll be eager to share what comes out of that. I actually built one myself for a server-side Swift Vapor API, which was enlightening. I was new to the whole buildpacks concept, really, so for what it's worth, it seemed to function as expected.
E
Is there a link to the work that you're doing that you could share?
B
I'm still learning about that choice myself. It had to do with some personal preferences, as I understand it, as well as cross-platform support, to support Windows. Beyond that I don't have a more thorough answer; if you give me a few minutes here, I can maybe pull up some more detail on that and give you some more info.
D
On a different topic: this week there were at least one, maybe two people who showed up in the CNB Slack wanting to know about creating buildpacks to install dependencies that they were interested in. We directed them to the buildpacks.io documentation, that sort of bare-bones how-to-create-a-buildpack guide, and they came back with the feedback, or they left with the impression, that that guide was specific to Ruby, which maybe is feedback that you've heard before.
D
But I just want to surface that, yeah, that's something that we heard, which for me was surprising, because I already know what's going on in those docs. But I guess, if you don't already know, then it seems that way. So if those docs could be improved to sort of point out "here's where this would not be Ruby-specific" or whatever, I think that small change would provide a lot of value.
C
I was actually planning on talking about this. In our last Learning Team meeting we dedicated the entire hour to our buildpack author's guide and its shortcomings. We have a similar situation at Bloomberg: we have lots of new buildpack authors, and they go to the buildpacks.io docs and they don't find much there.
C
There are lots of things missing. There are also lots of concepts which are difficult to grasp from the current set of tutorials. There's no information on how inter-buildpack communication works; it's just a single standalone buildpack that uses its own provision, so you never really get into the depths of how provides and requires and the resolution of those requirements work. There's just a single line that magically picks out the Ruby version, but it never really talks about how.
C
How do you deal with version resolution? Or when you have multiple requirements, how do you resolve that? None of that is mentioned anywhere, nor is any example of it given. There's no documentation on setting environment variables whatsoever, which is huge, I think. So we spent most of the last Learning Team meeting, the entire hour really, figuring out how we can improve the buildpack author's documentation, and Aidan has, like, PRs he is working on, slowly trying to address those.
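Since the tutorial never shows it, here is a minimal sketch of the contract being discussed: a `bin/detect` script that writes provides/requires entries into the build plan. The buildpack name and the pinned version are illustrative, not from any real buildpack; a real detect script would parse the version from the app's files.

```shell
#!/usr/bin/env bash
# Hypothetical bin/detect for a "ruby" buildpack, sketching the
# provides/requires contract. The lifecycle passes the build plan
# path as the second argument; it is defaulted here so the sketch
# can be run standalone.
set -euo pipefail

plan_path="${2:-plan.toml}"

# A real buildpack would parse this from Gemfile.lock; hardcoded here.
ruby_version="3.1.0"

cat >> "$plan_path" <<EOF
[[provides]]
name = "ruby"

[[requires]]
name = "ruby"

[requires.metadata]
version = "$ruby_version"
EOF
```

When several buildpacks each require "ruby" with different constraints, the providing buildpack sees all of those entries at build time and has to resolve them to a single version, which is exactly the resolution step the current docs leave out.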
C
We also talked about the fact that currently there's just this one tutorial for buildpack authors, and it's one long thing. It doesn't tell you how to do very specific tasks, like set environment variables or create the start script or entry point; you have to follow the whole thing.
C
It's all in the context of that Ruby example, so you have to follow it end to end to know how to do anything. That's as opposed to the app developer's reference, which tells you how to invoke an entry point, how to use a project descriptor, things like that. So, the idea is breaking down the buildpack author's documentation into similar how-tos, and then having some sort of language switcher. We'd still use bash, because it's easier for most people to understand than Go, but I think it's really hard to write production buildpacks in bash.
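As a concrete sketch of the undocumented environment-variable task: a buildpack sets launch-time variables by writing files under a layer's `env.launch/` directory, which the lifecycle turns into environment variables in the running app. The layer name and variable here are made up for illustration.

```shell
#!/usr/bin/env bash
# Fragment of a hypothetical bin/build script. The lifecycle passes
# the layers directory as the first argument; defaulted here so the
# sketch runs standalone.
set -euo pipefail

layers_dir="${1:-layers}"
layer="$layers_dir/config"
mkdir -p "$layer/env.launch"

# Mark the layer as available at launch time.
cat > "$layers_dir/config.toml" <<'EOF'
[types]
launch = true
EOF

# Each file under env.launch/ becomes NAME=contents in the app process.
printf '8080' > "$layer/env.launch/PORT"
```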
C
So I think the idea was to divide this out and then have a switcher, like a tab switcher, where you can choose your language. The default is bash, just so you understand what's happening; if you want to use libcnb, for example, you can switch over to Go.
C
The other feedback that came up was that the current version of libcnb has some interesting interfaces that we're trying to get rid of in libcnb 2. So we also had some discussion on when the appropriate time is to put Go documentation on the website: is it right now, or after we have libcnb 2?
C
I think we've been talking about it for a while, but I think we should start coming up with something concrete. We also talked about alternative Go implementations; we talked about common utilities you might need, and then packit and libpak came up.
A
I feel like if there are situations where we feel tempted to document something in packit or libpak, that's a good signal that maybe we should work on extending our libraries, or even the spec itself. Like, I'd love to tell people an easy way to choose between, say, three versions of a dependency, which is the sort of infrastructure we've built in Paketo using the metadata section of buildpack.toml and the library. But instead of documenting that, could it become first-class?
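For readers unfamiliar with the pattern, the Paketo convention referenced here looks roughly like this: since the spec leaves the `[metadata]` table of `buildpack.toml` unstructured, the shared library defines its own dependency schema under it and resolves a version constraint against the list. The id, versions, and URIs below are placeholders, not from a real buildpack.

```toml
[buildpack]
id = "example/node-engine"
version = "0.0.1"

# Library-defined convention, not part of the CNB spec itself:
# candidate dependency versions the buildpack knows how to install.
[[metadata.dependencies]]
id = "node"
version = "16.14.0"
uri = "https://example.invalid/node-16.14.0.tgz"

[[metadata.dependencies]]
id = "node"
version = "18.0.0"
uri = "https://example.invalid/node-18.0.0.tgz"
```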
C
I think we've spoken about this in the past. There's this tools section we have documented, with common tasks that you want to do as a buildpack author, which it would be nice to have some reference implementation for, provided by the buildpacks side. But as soon as you hit that, it becomes very opinionated in a way that's specific to the language.
B
So let's see here, I can just read from this document. Why Rust, not Go? Go is likely the language of choice people will think of for solving portability concerns across platforms. However:
B
Go's simple abstractions optimize simplicity over correctness across all OSes. Much like Node.js APIs, most of those APIs are designed for Unix-like operating systems. For example, Windows files don't have modes or stat/fstat syscalls, but it's common for file APIs in Go, JavaScript, and Node to be Unix-first. So I think that's just the theme: you know, Rust is going to...
B
Really, the primary drivers were strong typing and, I guess, you know, preference. So that's kind of the logic.
B
You know, so I think that's part of it: efficiency, velocity. And then there is one other comment in there that I hope no one takes offense to, but maybe some ownership anxieties, you know, sharing a project with a wider group, less control.
A
Yeah, that's great. I think this group is very interested in all that feedback, like whether you want to use our library or not; use whatever is most helpful to you. But what we really want is feedback about your experience of being an author of a cloud-native buildpack, so we can use that to inform priorities.
C
The idea, and it's not fully defined here, was that there could potentially be a read-only workspace that all the buildpacks read the application source code from, and then specific workspaces that they output to. So let's say all the Java buildpacks want to work together and output a jar somewhere that's not the application workspace, and you want to keep your config file somewhere else.
C
You want to keep your front-end compiled assets in some other workspace; so the idea was to just have multiple workspaces. This is also achievable right now with subdirectories in /workspace.
C
The other benefit was that you could change the location of this output workspace from a subdirectory of /workspace to some other path, and the buildpacks that want to collaborate on a common workspace can just declare, "hey, the workspace I'm interested in is the Java one." You can map the Java one to be something like /whatever, or, I'm not really familiar with JavaScript, but whatever folder you want to put things in for Node.
C
This was an extension of this RFC, because we were dealing with so many use cases at once. We wanted a way to export specific folders; we wanted a way for multiple buildpacks to collaborate on common workspaces. And while we were doing this, one of the ideas that came up was: what if the original workspace was just read-only?
C
Because typically a buildpack detects against something, and if one of the previous buildpacks has wiped the workspace completely by the time it gets to building, the buildpack that detected now has nothing to do. It just fails if the user or the previous buildpack didn't leave appropriate files for it.
C
The typical example was: if you want to build an application that has both a front end and a back end, with the back end written in something like Go or Java, that buildpack will typically just clear out the workspace, and by the time the Node buildpack starts compiling things, those files would already be gone.
C
Then there was some other conversation around how we can use this to deal with monorepos, which, again...
E
Yeah, I mean, we certainly have some of those problems. We're very dependent on the order that buildpacks run in, and adding new buildpacks is an exercise in really making sure you understand that order, to know where things get plugged in. So far it's been manageable, but being able to, I guess, reduce some of that dependency on order would certainly be a win for us. When I was reading that thread,
E
My concern was around trying to understand it; it seems to flip the paradigm around. Right now it's default-everything-in, and this is kind of default-nothing-in, which, you know, is fine.
E
Because we are paring out quite a bit of stuff anyway, whether we copy in or delete out is sort of the same thing for us. But it sounds like there are some potential benefits to doing that. I'm a little unclear about the multiple workspaces, though: does that then compress down into a single workspace in the final image, or do you then have to coordinate those?
C
You would definitely have to coordinate those, I think. There have been a couple of RFCs that got pulled in; one of those was the process-specific working directory proposal. Since these workspaces would be different directories, if you have specific processes that work with them, say you have a worker process and some server process, your worker process may be in a different directory and a different workspace, and the buildpack that created it would launch it in that workspace. Or take cross-buildpack communication:
C
The idea was that you give a workspace identifier, and the buildpack knows the location of that common workspace by its identifier, and they can put things in that shared workspace the way they do right now. So rather than hardcoding it to /workspace, they know the name of the common workspace they're interested in, sort of similar to the contract we have around provides and requires right now: it's just a string identifier, and buildpacks know how to communicate with each other using that string identifier.
C
Similarly, the buildpacks that want to work together, or share information about the workspace they're putting things in, would just use that string identifier to map to a directory, and then appropriately set the process entry points, or any environment variable paths or whatever, based on that.
A
It seems like you'd have to be aware of a lot more, yeah, because right now it's just the one working directory. How do buildpacks become aware of the workspaces? And there are also a lot of conventions around this: if I create an exec.d script, it runs in the workspace, and then would you want different ones to run in different places? Same with the launching of the process itself.
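For context, the exec.d convention mentioned here: at launch, the lifecycle runs each executable under a layer's `exec.d/` directory with file descriptor 3 open, and reads TOML key/value pairs from it to extend the process environment. A minimal sketch, with a made-up variable and a stdout fallback added so it can be run directly:

```shell
#!/usr/bin/env bash
# Hypothetical exec.d script. Environment additions are written as
# TOML to file descriptor 3 per the launch contract; fall back to
# stdout so the sketch can be exercised outside the lifecycle.
set -euo pipefail

if { true >&3; } 2>/dev/null; then
  exec 1>&3
fi

echo 'WEB_CONCURRENCY = "4"'
```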
E
I also wonder if it results in duplicate files. If one buildpack is copying a bunch of stuff into its workspace and another is copying stuff into its workspace, is that going to result in the same things being copied to multiple different locations, if they don't coordinate at a really basic level?
C
You have to know what a specific buildpack provides, what the exact names and the structure of the metadata for a requirement are. You wouldn't magically be able to just set that to anything; you have to know exactly what the other buildpacks in the group are doing in order to achieve communication between buildpacks. This is just expanding that from provides and requires to working directories.
E
I guess what I thought is, with Paketo we coordinate things pretty closely, because we control all of our buildpacks. But if we pull in a third-party buildpack that maybe just wants to add on some functionality at the end, say it needs some application files, it probably has no idea what Paketo is doing, and it's just going to copy some swath of the application into its own workspace.
E
That's where, to me, you could end up with some duplication in that resulting image. I only mention that because any time we have the slightest bit of duplication, people get really upset about their images being larger than they should be. It is what it is, but maybe just something to consider.
C
Again, it's about the buildpacks you're using and whether they coordinate together, right? There's nothing right now in the API that says all the buildpacks you use in the group work effectively with each other. The lifecycle and the platform don't really have any guarantees to make sure that the buildpacks you're choosing are compatible; they just run.
D
Since we're talking very abstractly about this concept: maybe you have examples, Sam, where you're already doing this by another means in your buildpacks. I can definitely think of the .NET Core language-family buildpacks in Paketo doing a thing that's very similar to what you're describing, I think, where we have multiple buildpacks contributing to what we call the dotnet root, which is where sort of all the important libraries and the runtime get installed. They all do need to be co-located as far as .NET itself is concerned, so we do some hacky-ish things to ultimately get it to work, where we're dumping stuff into a specific location inside of /workspace. It does introduce coupling that can be painful from a developer perspective, and I don't know that this proposal would necessarily ameliorate that.
C
Yeah, at least my motivation for this was: hey, we have an ecosystem that works similarly to what you described in dotnet, except we need to keep it at a specific path, a path that is not /workspace. The way we get around that right now is through some hacky things with the stack and symlinks, to get the workspace directory to map out to that other location.
C
And this proposal was specifically to get around those kinds of issues. The other example we had was the AWS extensions one, where they're going to be somewhere like an app AWS directory, and then you put the extensions there; if you have multiple extension providers, they might all want to work within that directory.
C
On the other comment, around opt-in versus opt-out in terms of file-copying behavior: leaving everything from the application source code in the output image by default definitely goes against a lot of the security aspects of buildpacks we aim for. You're leaving source code files there, you're leaving potential secrets there, you're leaving your git directories there with all of your past commit data, unless there's a buildpack that's specifically cleaning all of those things out.
C
It's also very different from how buildpacks typically work: they detect "here's a file I'm interested in," and "here's a series of steps I'll perform on the files I'm interested in." Now the buildpack has to be aware of not only the files it's interested in, but also potentially any other files it has to clean out. So the current concept of detect goes better with the buildpack selectively taking the pieces it's interested in from the app folder into the output directory.
C
I think there are two concepts here: there's the multiple-workspace-directory stuff, and there's the one where a read-only application source code input gets built into some output, with the buildpack selectively deciding which pieces it needs to take.
A
The other argument I guess I would make, and I need to read the proposal about multiple arbitrary workspaces to know how this maps, is that in the past I've sort of wished we had two: an input workspace and an output workspace. Mostly not for our existing workflows, but when you think about what a develop API would look like.
A
If you want buildpacks to be able to produce change sets that can be live-reloaded into a running container, for some of these Tilt and local-dev workflows, what you really want is for buildpacks to be able to look at changing inputs and then make their modifications to the output.
C
We discussed a lot of them, like, if we introduce something like this, how could it be used in the future to enable use cases like the ones we're just talking about? But I didn't actually describe all of those in this RFC, yeah.
A
There are a couple of non-starters that we should probably talk about before we develop this too far. Number one: is this implicitly assuming that all buildpacks will upgrade their buildpack API at the same time? Is there any world in which you can upgrade one at a time in this workspace situation? Because if not, that might just be a hard blocker. And then number two:
A
There are trade-offs there, but it seems like, in the view of the project as a whole, that's a not-okay type of breaking change, and I accept that, hearing the counter-arguments. And if that is a not-okay type of breaking change, I see the fix as pretty trivial, especially if you're sharing a library: move from, you know, reading the second argument to reading a specific environment variable. So then, like...
C
So that's why I didn't include all of this in the current RFC, exactly because of those issues, like how do we preserve backwards compatibility when current buildpacks just assume everything left there will be copied out? We didn't have a good solution for it last we discussed it. It was something that was desirable, but we couldn't think of an immediate solution for backwards compatibility, which is why we just left it out of this RFC in its current state.
C
Also, to give you some context: I barely have any time to work on this RFC anymore. My main focus, at least from the buildpacks perspective right now, is the security and integrity and cosign parts of it. So what time I have right now, I'm mostly going to be spending on that.
B
So I understand what you're going for, I mean: slimmer, more secure, without having to write an additional buildpack to remove, you know, sensitive files, or files that are unnecessary once they've been compiled, for compiled applications. That kind of work seems burdensome, whereas if it by default forces you to indicate what you want in the output image, then you're in control from the start. The problem is, you already have a spec out there, and anyone using it is going to have to do that work to upgrade.
C
So they had to figure out some implementation, just copying the original source code to a separate volume and then providing that volume to the buildpacks. I don't fully understand the context there, because I don't use Tekton; I don't know what you're doing that would require you to continuously have the source code. I'm guessing some sort of CI/CD pipeline, where you start with a Tekton task that takes in your source repository, buildpacks build it, and then you want to do something beyond that with the original source code.
A
I think if we built something for just that problem, the solution I would want is something like what we have with slices, but like the anti-slice. Right now you can have a glob that matches some fraction of the workspace, and then it becomes a separate layer, and you can use that to optimize performance.
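For reference, the slices mechanism being described is the `[[slices]]` table in `launch.toml`: path globs whose matching app files are split out into their own layers. The exclusion or "anti-slice" variant proposed here does not exist today; the globs below are illustrative:

```toml
# launch.toml written by a buildpack's bin/build. Files matching each
# entry become their own layer; today everything is still included in
# the image, so this only helps caching and rebasing.
[[slices]]
paths = ["node_modules/**"]

[[slices]]
paths = ["public/assets/**"]
```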
A
But if you had a glob that matched some fraction of the workspace, either by including or excluding, and then, instead of just making it a separate layer, you said "I'm just not going to include that," I think that would be a way to get deleting behavior without deleting anything. But it doesn't solve all the use cases this is talking about; that would be a solution specifically engineered for the Tekton type of problem.
A
Maybe whiteout slices kind of get at some of the other use cases, like the back-end and front-end use case, but it gets confusing, because everyone's still making changes to the same directory, and then the things that you're removing, you're removing behind the scenes in a way that no one else can see what you've done.
C
There was another one of the use cases we talked about, which again I didn't put in the RFC, that was sort of similar to your whiteout example, in that you're still saying "I don't want this in my output image." The idea was support for multiple workspaces mapped out to different environments.
C
So with a single build process you end up with multiple images: one you can use for developing and testing, which has your test and dev dependencies, and then one which has your production dependencies. That's sort of what you can potentially achieve using Docker multi-stage builds as well: you start with something, then you bifurcate your flows; one target produces a dev image with dev dependencies, and another target produces the prod image with prod dependencies, but it's a single Dockerfile that describes everything.
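The Docker analogy can be sketched as one Dockerfile with two targets, built with `docker build --target dev` or `docker build --target prod`. The base image and commands are illustrative:

```dockerfile
FROM node:18 AS base
WORKDIR /app
COPY package*.json ./

# Dev/test image: all dependencies, including dev ones.
FROM base AS dev
RUN npm install
COPY . .
CMD ["npm", "test"]

# Production image: bifurcated from the same base, prod deps only.
FROM base AS prod
RUN npm install --omit=dev
COPY . .
CMD ["npm", "start"]
```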
A
You know, looking at the list of things, if I was prioritizing, this would not be the top one. So I wonder if, as a project, just so we can get more done, we should be a bit more ruthless about saying what we're focusing on at any given moment versus what we're not.
A
I had some ideas around, like, more explicitly limiting the number of workflows we have in flight, by having champions, and, like, you can only champion one RFC at once. So if you're championing one, then your bandwidth is sort of used up, and things that no one is championing get put on hold; some ideas like that. I'm not wedded to that particular one, but I want to find a way to sort of figure out what our bandwidth is, and then just try to work on that number of things, rather than, like...
C
We could be more explicit about marking RFCs to a specific team, and you'd need at least one maintainer from that team to champion your RFC before it will be implemented and actively worked on. Otherwise, you just put the RFC on hold until you find a maintainer, or a contributor who's aware of the changes, someone who's doing the legwork.
D
Yeah, from my perspective, trying to stay up to date on, like, "oh, what are the changes that are going to be coming from upstream, what changes are we going to need to make in Paketo soon?" I'm following GitHub issues for RFCs that have been accepted, or just every so often I check on RFCs, and yeah, sometimes it's really hard to tell, even from looking at the notes and PRs, like, do...