From YouTube: CNB Weekly Working Group
A
Okie dokie. Do we have any new faces today? I see familiar faces.
B
I think we'll ship the release candidate for lifecycle 0.14 today. We also shipped a patch last week to fix a bug with caching and SBOMs.
A
Awesome, thanks for the update. What about platform? Javier, any updates on platform?
C
I just got back, so I haven't really seen everything that has happened. I haven't heard much from David or Nikki that I could tell, so I'll ping them and circle back around next week.
A
Sounds good. Distribution, any updates?
A
All right, okay, moving on from release planning. Up next on our list of future topics is the process reaper, but I don't see the Bloomberg folks here right now, so I'm going to move that to the end of the list, because I assume they would like to be part of that conversation. We can move on to providing run image SBOMs. Don't talk about that one! Natalie?
B
You can see my screen? Okay, and it's the right one, this one? Okay, let me go to the RFC. So basically, Anthony had put up an RFC prior to this that was about requiring the SBOM for a run image to be baked into the image itself and designated with a label.
B
In the alternatives section: basically, following a conversation in working group a few weeks ago, we decided it would be interesting for platforms to just supply the SBOM for the run image when doing a build, making it more the platform's responsibility to ensure that it's accurate. Anyway, I put this RFC up, which is really just sort of summarizing...
B
...my understanding, based on that discussion, of how that could work. I think there's been some feedback, certain things that need to be understood better.
A
I can represent some of Sam's concerns here, because I think this was largely a sticking point for him. It's mostly about folks who extend run images with Dockerfiles, which is a pretty common practice. If you're not aware of the SBOM that's baked into the run image and you've made a bunch of changes to that image, the image is now reporting an inaccurate SBOM, because it's baked in there.
D
How does that get better if it's separate from the image, though? If someone extended a run image that didn't have the SBOM in it, and then provided an SBOM that was associated with the version before it was extended with a Dockerfile... I mean, is there a process or something that Bloomberg is using where it makes it more difficult to track versions across SBOMs if they're put together?
A
I think the SBOM can always be wrong, right, but it's a question of whether you're being deliberate when you're providing it. In the case where it was baked in, the whole system would just assume it was correct, and you wouldn't even know that it had gone out of date. Whereas if you're explicitly passing it, platforms can add different checks in different environments to say, yes, this is the run image SBOM that you should be using.
D
You could capture the last layer of the image in the SBOM, the last diff ID, and then you would at least know if the image was extended. You could still extend it without invalidating it, as long as you somehow communicated that it's okay that the additional layers don't invalidate it. But I don't have a strong opinion.
A
My concerns here are that I think we either need better tooling first, to help people create stacks that would then have accurate SBOMs baked in, or we need to let platforms handle it. That's another option, right? It's putting a lot of onus onto stack creators when there's no tooling to support them.
D
I think in the end we want it so that when you pack build, or use kpack or any of the common platform tools, you get an SBOM that includes the run image SBOM, if the run image SBOM is available. And so it means that if we don't attach it to the run image, like we're doing with the app images, then we need to say the pack CLI is going to read either cosign or, you know, if a cosign SBOM is available, we'll check for that.
D
We'd have to make those checks every time we do a build in order to be able to pull this stuff. And especially if we don't standardize an internal format, we're always going to be reaching out to the registry to check for these additional images, which is okay; I just don't love that either.
D
Yeah, I think as soon as we say we're not mandating that the format be part of the image, people will use standardized tools like cosign to create the signature. Which means that if we want that guarantee, then we should have all the platforms check for all the formats. So pack should reach out to the registry to look for the cosign SBOM or the cosign attestation on every build.
D
I think it's an implication of it, no? This RFC is saying we shouldn't attach the SBOM to the run images; we should let that be provided, be bundled, right? Right, yeah. That means that when pack does a build, or when kpack does a build, in order to source SBOM information to end up in the final image, as a platform you're going to have to check many potential sources of SBOMs for the run image.
D
So
that
means
the
pac
or
k-pack
or
whatnot
will
have
to
reach
back
out
to
the
registry
and
see
if
they're
at
the
stations-
or
you
know,
cosine
s
bonds,
maybe
check
the
signature
of
those
cosine
s
bonds
right,
so
you
know
it
could
be,
or
this
could
cause
us
to
increase
the
number
of
requests
we're
making
back
to
the
registry
during
the
build
process.
C
Yeah, I mean, even if we dive a little more deeply into that: anything that the platform does at that point, where it inspects various potential sources for SBOMs, is still something that the lifecycle itself could do, right? It already analyzes things with regard to the app image; I can't see why it wouldn't be able to analyze the run image as well.
B
What I suggested was the preparer, this kind of yet-to-be-fully-defined component that platforms can provide. And Javier, correct me if I'm wrong, but I think your RFC says that we should have a reference implementation, so maybe our reference implementation could include it. I don't know if we need to bake it into the lifecycle exactly, but yeah, the project can provide something to help this along.
C
Okay, yeah, that makes sense. So basically going back to providing some sort of tooling for platforms to be able to easily access these features.
A
Yes. I feel like changing the output format of the SBOM is a different can of worms, right, but this is just about the input format of the run image SBOM. And if we provide it as a flag, then we have the flexibility for a platform to either, you know, do the scanning on the fly and then provide it, so that it can make sure that it's in the right format and up to date, or do something different if we wanted stack authors to bring their own SBOMs.
E
I think the easiest way to start, if you wanted a dumb implementation, is to ask the user for, like, the file on disk that the platform includes. Then we can add syft and fetch things from attestations or attached SBOMs, or have something that just scans the image if something's missing and attaches it. So we can do it progressively; there's nothing in this RFC that says that you must do it in a certain way.
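The progressive approach described here (user-supplied file first, then registry attestations or attached SBOMs, then scanning) could be sketched roughly as the following fallback chain. `cosign download sbom` and `syft` are real CLI commands, but the function name, output contract, and overall flow are illustrative assumptions, not anything the RFC prescribes.

```shell
#!/bin/sh
# Sketch of a platform-side fallback chain for sourcing a run image SBOM.
# fetch_run_image_sbom and its "cosign"/"syft"/"none" result are hypothetical.
fetch_run_image_sbom() {
  image="$1"   # run image reference
  out="$2"     # file to write the SBOM into
  # 1. Prefer an SBOM attached to the image in the registry via cosign.
  if command -v cosign >/dev/null 2>&1 \
      && cosign download sbom "$image" > "$out" 2>/dev/null; then
    echo "cosign"
  # 2. Otherwise scan the image with syft and generate one on the fly.
  elif command -v syft >/dev/null 2>&1 \
      && syft "$image" -o spdx-json > "$out" 2>/dev/null; then
    echo "syft"
  # 3. Nothing available: report that clearly so the platform can warn the user.
  else
    echo "none"
  fi
}
```

A platform could run something like this before a build and pass the resulting file to the lifecycle through whatever input flag the RFC settles on.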
D
I'm
still
a
little
worried
about
the
user
experience
just
like
taking
the
pax
cli
as
an
example
right
like
to
me,
the
biggest
risk
is
we
introduce
support,
for
you
know
we've
kind
of
carefully
designed
this
api
so
that
at
every
point
you
know
you
can
definitely
get
an
accurate
kind
of
build
materials
of
every
piece
that
gets
installed
and
so
at
any
place.
Where
there's
like
a
non-obvious,
you
know,
hey,
you
have
to
figure
out
where
you're
gonna,
you
know
get
your
s-bomb
from
here
right.
D
Many
users
are
going
to
miss
that
right
and
the
run
image
is
a
big
source
of
you
know
potential.
It's
like
a
large
number
of
packages
that
kind
of
can
come
in
through
that
avenue
right,
and
so
it
puts
it
on
the
platform
to
say
whatever
build
process.
We
use
we're
going
to
make
sure
with
certainty
right
that
if
there's
not
an
s
bomb
available,
we
can
automatically
grab
s-bombs
for
all
these
components
and
if
there's
not
one
available,
then
we're
going
to
make
it
very
clear
to
the
user
that
that's
true
right.
D
Nothing
about
this
necessarily
blocks
that
right.
But
then
you
know
if
the
suggested
implementation
for
this
for
parent
is
to
integrate
sift.
Are
we
comfortable
with
pac,
integrating
sif
directly
and
making
that
you
know
part
of
the
the
pack
experience
for
all
users
right
and
if
we
are,
are
we
comfortable
with
sift
actually
pulling
the
run
image,
and
you
know
opening
it
up
locally
right
and
scanning
it,
and
you
know
generating
an
s
bomb
on
every
build
when
the
user
doesn't
use
a
run
image
that
has
an
s
bomb
like
that.
D
That
seems
like
it's.
You
know,
I'm
not
saying
that's
the
wrong
thing
to
do
right.
That
might
be
even
better,
because
it
means
that
the
user
uses
a
custom
s
bomb
or
you
know
a
custom
run
image
right,
we're
we're
doing
the
absolute
best
we
can
to
get
the
data
there,
but
that
definitely
has
performance
implications
too.
I
want
to
make
sure
that,
if
that's,
if
that's
the
user,
experience
that
we're
talking
about
delivering
for
back
that
we're
all
everybody
is
okay
with
that
our
workflow.
A
Maybe a conversation with me, Natalie, Sam, and Steven separately, and maybe anyone else deeply invested in the run image SBOMs.
F
Oh, because we were having some discussions during core team sync, and I was trying to shut that down a little bit, to either do it here or in that venue. And we may be talking about it again at BAT on Friday.
F
And then I think Aiden was the one who opened this RFC that talks about potentially adding a --libcnb flag, as well as a kind of template one that could be hydrated from any URL that, I assume, followed the kind of format that he has listed in the RFC.
F
Basically, there's going to be in that template a prompt.yaml that allows you to specify a certain set of prompts that you can ask a user. Then, I believe, the rest of it will just be the skeleton of the thing you're generating, and in those files you can have this templating substitution to replace values and build in these lists.
F
These flags here, the ID, version, API, and stacks, will be kind of independent of the prompts; they would be applicable across any template that we're going to do. This allows you to generate it. I think some of the unresolved questions from the discussions on the actual pull request are around...
F
I
think
he
wanted
lib
cnb
at
least
to
be
offline,
so
it
would
be
kind
of
baked
into
pack.
So
us
updating,
it
will
require
changes
to
pack
every
time,
and
I
also
had
some
questions
around
kind
of
like
default
values
which
we
should
have
any
or
if
a
template
could
provide
default
value
for
some
of
the
stuff.
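For illustration only, a prompt.yaml of the kind described might look something like the following. The file name comes from the discussion, but every key, prompt, and default below is invented; the real schema is whatever the RFC defines.

```yaml
# Hypothetical prompt.yaml for a buildpack template (invented schema).
prompts:
  - name: buildpack_id
    message: "Buildpack ID (e.g. examples/my-buildpack)?"
    default: examples/my-buildpack
  - name: buildpack_version
    message: "Initial version?"
    default: "0.0.1"
  - name: stacks
    message: "Comma-separated stack IDs?"
    default: io.buildpacks.stacks.bionic
```

Values collected from prompts like these would then be substituted into the skeleton files via the templating mechanism described above.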
F
I
think
that's
basically
the
overview
of
it.
I
know
just
briefing
from
yesterday
talking
with
javier
from
the
meeting
of
just
where,
where
should
this
stuff
live
kind
of
ownership
of
who
would
actually
own
this
stuff?
I
imagine
the
bat
team
seems
to
be
the
right
home
for
at
least
owning
the
live,
cmb
template
and
also
may
have
interest
in
helping
out
for
the
broader
template
stuff
as
a
whole,
but
kind
of
open
for
discussion
and
other
things
related
to
that.
E
I was inspired by that for this whole RFC, and there were a bunch of open questions around replicating some of the functionality we have there in this RFC. So cookiecutter is a Python project, and it's not easy to find an equivalent for it in Golang. It provides things like validation of inputs, being able to provide these prompts non-interactively, being able to provide a list of options or choose a default value, among other things, and being able to run pre- and post-scaffolding hooks.
E
So
we
we
sort
of
use
all
of
those
to
generate
like
an
appropriately
scaffolded
repository
at
the
end
of
it.
So
most
of
my
open
questions
were
around
like
what.
How
do
we
deal
with
that,
and
also
in
terms
of
just
the
offline
behavior?
I
think
it
might
be
easier
to
just
keep
the
template
separate
from
like
the
actual
tool
that
can
take
in
a
url
or
something
and
then
scaffold
it
even
in
terms
of
air
cap
environments,
people
have
git
repositories
or
something
that
they
can
point
this
tool
to.
E
So
it's
it's
not
like
back
needs
needs
access
to
the
internet.
It's
just
that
it
will
need
access
to
some
network
that
has
a
git
repository
which
contains
the
the
original
template,
but
I
I
think
it
will
be
very
like
it's
going
to
be
very
tedious.
If
each
time
we
make
an
update
to
the
template,
we
have
to
go
and
bump
the
version
and
back
and
release
that,
for
the
end
users
to
get
the
new
one.
F
I mean, it's where the current command lives as well, for what that's worth. Yeah, so you're not really paying much there, I think.
D
In the end we want them versioned as one, right? Like, we're not looking to separate the release process for the templating functionality from the release process of pack.
E
Yeah, but I think the template generation tool, once created, is unlikely to have changes to it. The templates themselves will have new flags, or support new APIs, or different behaviors, but the scaffolding tool itself is fairly generic. Just for reference, the current tools that it refers to have been mostly unchanged for the last three or four years.
A
If we had a beautiful Go library for that, I could see that having uses, and pack using it to provide certain commands is just one of them. Especially since a lot of people who could use pack functionality as a Go library are hesitant to, because of all the Docker dependencies and other stuff that comes with pack, because pack is pack. So separating simpler things into independently consumable libraries would meet some users' needs, but I think they should all still roll up into pack, because that meets a lot of users' needs as well.
F
Breaking up and doing that stuff is out of scope for this particular proposal, right?
F
You were mentioning a curated list of stuff, Sam. I know that was in Joe's original RFC, to potentially have a repo of templates, but maybe that's not the right format, and what we care about is probably even something more akin to a registry that points to something. That's not mentioned at all in this RFC, though you did call it out in your explanation of the motivation behind this. Is it something that you're interested in doing as part of this?
E
I
think,
like
I
was
hoping
we
could
follow
a
similar
path
as
our
default
builders
or
trusted.
Builders
like
we
can
include
the
templates
that
live
in
buildbacks
initially
and
then
like,
if
some
of
the
other
vendors
that
we've
come
to
trust,
have
their
own
templates
that
use
this
tool
and
they
request
it.
We
can
add
it.
E
But I think I left all of those as comments. Those are ideas, but nothing concrete.
A
We could even do something similar to what Docker Hub does, where every image has a full address, but if you leave out certain parts we fill things in. So we'd assume it's GitHub, the buildpacks org, and then whatever the name of the template is. Something like that could make commands shorter.
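As a sketch of that Docker-Hub-style shorthand, the expansion rules might look like this. The precedence (a full address passes through, an org/name pair assumes GitHub, a bare name assumes the buildpacks org) is an assumption extrapolated from the comment, not a decided scheme.

```shell
#!/bin/sh
# Hypothetical shorthand expansion for template references.
expand_template_ref() {
  case "$1" in
    */*/*) echo "$1" ;;                        # host/org/name: already a full address
    */*)   echo "github.com/$1" ;;             # org/name: assume GitHub
    *)     echo "github.com/buildpacks/$1" ;;  # bare name: assume the buildpacks org
  esac
}

expand_template_ref samples            # github.com/buildpacks/samples
expand_template_ref myorg/my-template  # github.com/myorg/my-template
```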
C
So
what
I
don't
know
if
the
rfc
goes
into
this,
but
do
we
envision
a
repository
per
template
and
not
having
templates
within
a
repository.
F
I imagine we also don't want to be in the business of maintaining language bindings for any kind of libcnb type of thing, and then also, even if we aren't in the game of that, even maintaining the templates that go with those language bindings.
A
We
probably
don't
want
to
maintain
a
ton
of
templates,
because
you
know
we
can
have
one
or
two
useful
ones,
but
I
think
when
it
comes
to
templates,
people
always
want
their
own
anyways
right,
they're,
going
to
like
fork,
yours
and
customize
it
with
their
particular
flavor
of
stuff.
So
we're
never
going
to
maintain
a
bunch
of
templates
that
are
useful
to
everyone.
We
just
need
to
maintain
some
good
examples
and
then
people
can
derive
their
own
from
there.
F
Well, I guess I was saying, for the Docker library example: there's basically a ton of outside contributors, so even if you aren't maintaining that template, the people who own that particular language binding potentially are.
G
Yeah, but I don't think our templates are going to scale that same way. What you're talking about is distributions of Linux and things like that, where there's a lot of possibilities. What we're talking about is, I think, one opinionated template that we maintain. I think we should focus on a burgeoning ecosystem of templates.
E
And there are also other mechanisms to discover templates. The Docker library example is not exactly one-to-one, since that is also a registry where multiple people can push things, whereas this would be Git repositories that we don't own, or we'll have to provide them some write permissions or something.
E
I'm not sure. The other way that I've seen templates being discovered is just repository topics. If you search cookiecutter, for example, on GitHub, it'll give you a whole list of templates, and they define certain tags in the documentation, saying that if you have a Python cookiecutter template, tag it with these repository tags so that it's easier to search on GitHub. So we can do something similar for the discoverability aspect.
E
I mean, because these tools are well maintained and fairly popular, and people use them not just for buildpacks but for lots of other project scaffolding.
G
Yeah, I agree with that. I think the challenge for us, though, is that we're going to cut across a lot of different ecosystems, whereas in the examples, it's Yeoman in JavaScript and cookiecutter somewhere else, and a lot of those tend to interoperate with npm or whatever. So I 100% agree with you.
E
Alternatively, and I'm not suggesting we do this, but there are also web tools that will do this for you: you just authenticate to one as a GitHub app and then it scaffolds things for you. But again, that limits things to a hosted offering on the internet, and so on and so forth. You could potentially have a create button on registry.buildpacks.io that does it for you, if you wanted an experience like that, but again, that limits where and how you can use it.
C
Yeah, I was going to say, to kind of counter the idea of pushing people to, let's say, cookiecutter when they're used to something else: maybe the alternative is for us to provide the templates that we care for, which is the bash one and the libcnb one, to the various ecosystems and toolings in that fashion. Right, so we would have a cookiecutter template, we could have whichever one you mentioned for npm, and so forth.
G
It's
a
question
of
what
are
we
optimizing
for
right
like
in
my
mind,
I'm
optimizing,
I'm
optimizing
for
bill
pack,
authors
especially
encouraging
new
ones,
and
I
think
anything
outside
of
pac
built
back
new
is
friction
even
if
it
aligns
with
the
tools
they're
already
using
I
mean
the
reality
is
like
people
aren't
even
people
that
use
yeoman
aren't
using
it
day
to
day
right.
They
use
it
once
a
month,
the
most
probably,
and
so
it's
a
new
tool,
even
if
it
fits
into
their
ecosystem
and
yeah.
I
agree.
G
I agree that there is some additional burden. I think that's why the first iteration of pack buildpack new is so dead simple. But I would challenge the idea that it is a burden to maintain pack buildpack new today, other than updating the default API version, which is kind of annoying.
G
That's why I think we should try to keep it simple. I do think there's value in providing this experience from pack, but yeah, we should ask ourselves if we are reinventing the wheel. And if we want this to become more fully featured, then yeah, I think maybe we should consider moving it out of pack, but I don't think what's in the proposal is so far along that we should do that.
G
But it's limited to, like, the four or five options, right? Okay.
A
We only have a couple minutes left. At the beginning we had sort of kicked the process reaper to the end here, mostly because we didn't have any of the Bloomberg folks. Now, Sam, you are here. I know that you weren't necessarily the one leading this, it was mostly Benjamin, but do we want to talk about it today, or do we want to...
E
But for what it's worth, I think we'll need that internally at Bloomberg.
A
So I'm hand-waving here, but it's like that goes in front of the launcher instead of after, because I don't want the launcher to do this always by default. That's sort of where I've come down, because we've had a lot of cases in the past where, even when things were kicked off with bash, people went into the container and saw bash as PID 1 instead of whatever their process was, because of our launching logic, and people didn't like that.
A
If I'm running a jar, I don't know, it feels a bit too heavy-handed to me to do this in every situation, but it definitely feels like something where we could provide an escape hatch. Maybe tangling with changing the processes is complicated because it's the wrong place, and we should instead be tangling with the entrypoint that invokes the launcher and letting you stick something in front of that.
E
So there's been a separate thread that we've been trying to solve, which is builder-specified configuration.
E
So if we could capture all of these requirements and put things on the builder that we wanted to end up in the output image, that would also be fine with us. Like, if we could define that when the builder has a specific binary at a given place, the launcher should just invoke that before going through the default route, or something like that, that would also completely work. That way we can just put tini in there, in that place, and then we don't even have to implement our own reaper.
A
All right, I think that sounds good. I'm sorry, I have not written up a more thoughtful response here; I keep delaying giving a good response to this. But I think if everyone's okay with that being the seam, I think that could work. I like that better than either modifying the launcher to do this or using...
B
Yeah, I could see this case being in front of the launcher, but I do think we still probably need something to eventually post-process the processes that have been defined at a platform level as well. I'm not sure whether, where...
B
...something like this, that can sit in front, sort of sidecar-ish behavior.