From YouTube: CNB Weekly Working Group - 25 May 2022
A
See a few new faces. James, you wanna introduce yourself?
B
Hi, my name is James Ma. I'm a new face to most of you, maybe not the others. I'm currently a product manager at Google. I work on the serverless team, and so I thought it'd be a good idea to start showing up to some of these meetings. We use buildpacks in a few ways to support some of our SaaS products, and I work on a relatively newly formed team that uses buildpacks in some unique ways.
B
So it's a bit early for us out here on the west coast, but I think over the next few weeks we're going to try to attend a few more of these sessions.
A
Thank you. Javier, any Buildpack platform updates?
D
If I'm not mistaken, we are waiting on that patch from the lifecycle in order for us to make some of the changes we're waiting for to release.
A
Cool.
On the Buildpack team side, we switched the main branch over to v2 and created a new branch for the v1 API so that, if we wanted to, we could still cut releases off of it. But all of the new development will now happen on the v2 branch. We haven't cut a release yet; we just wanted to switch the dev branch to v2. The dot-profile buildpack RFC was also merged, and there's a new repository for that.
A
So I think we'll start working on that, and we're hoping to use the new v2 API for writing out the dot-profile buildpack. That's pretty much it. I think we also have an announcement: Aiden was newly elected as a Learning team maintainer as of last night, so welcome! Any Learning team updates?
B
We've got a meeting tomorrow, and what we're going to focus on, because the technical oversight committee has asked us to, is basically the onboarding docs. We want to be able to get application developers building with buildpacks with a little bit more ease, and once we've got that done we'll go on to some deeper-dive stuff. But no announcements or releases yet.
A
I think we also had a really nice KubeCon, so we'll try to do a recap in one of the future meetings about the feedback we got from users who attended the project meeting and also visited the booth.
A
It handles the fact that it's PID 1, or, like, that the applications running through buildpacks are PID 1, and how that affects applications running in production, especially when it comes to reaping zombie processes or handling signals to child processes.
A
So I think in past meetings we've discussed that this could be a multi-step approach. The first step was to refactor the launcher code so that it's easier for us, in our internal fork of the launcher, to introduce the changes to reap child processes and also implement signal handling. But I think in the longer term we wanted the lifecycle to be able to do something like this based on a config flag in the builder or the run image.
A
If I remember correctly, folks were mostly convinced the last time we discussed this, and they were fine with a default-off value for handling this. And if operators wanted to, they could set this flag to true in their builder or run image, and then the launcher would actually take care of reaping child processes and also forwarding signals to them.
A
Should we start with an RFC? And I think, since we also have sort of a version of this working internally, we can also post the actual snippet that's required to have something like this supported in the lifecycle. I think it's very few lines of code.
A
But, I think: does an RFC with this code sample and the configuration sound good?
B
What's next? The next...
D
Yeah, I know there are various ways we can go about it. We can wait until the RFC for the component-level contributions, you know, gets approved and goes into effect.
D
Yeah, I think the only thing that I know we've agreed on, and that I would put as the caveat here, is: (a) denoting within the README that it's experimental, and (b) if you have any questions, comments, or concerns, go ping Sam. Right?
D
Yes, yeah. And I believe, I guess, that would be under my purview, so I would start the voting process and get everybody to cast their vote if they want another opinion. Cool.
C
This is sort of a large topic and I'm not sure from which angle we should approach it, but just to provide some context: we have these two features that have been progressing toward implementation.
C
You know, properties of your base images. And there's also this sort of vague sense that these two features intersect, but we're not sure exactly how, and Dockerfiles is really getting most of the mindshare at the moment. Separately, as part of the stack removal, we're also putting certain requirements on base images into the distribution spec; they used to be in the platform spec, and we're moving them over into this other spec.
C
That's supposed to be somewhat disjoint, and there have been some concerns about that, and a desire to talk things through to make sure we've uncovered all of the edge cases that will need to be accounted for. And that's kind of been the case for a while; we've just sort of said, "we need to talk about stack removal," and then we never do.
C
I think usually we say we'll talk about it when Dockerfiles forces our hand, right: like when we come to the point where we'd like to ship this thing and realize that we've got to think about stack removal. But that seems somewhat unsatisfactory, so I just put it on the agenda to see if maybe we could get some more concrete next steps or, you know, points of discussion. Unfortunately, I don't actually have anything to offer there beyond what I've just said. So, does anyone have thoughts on this area?
D
So the current feature: I guess, just for context, the feature that you're releasing, you know, this experimental set of APIs. Does that mandate that we answer the question about stacks? Do they interface with the concept of stacks or targets?
C
The first phase of Dockerfiles, which I expect (I hope) will ship pretty soon, because it's almost done, will be the ability to use Dockerfiles to switch your runtime base image. And, just to take a step back: right now, when we're doing validations around the stack, validations around whether the mixins requested by buildpacks are satisfied by the stack, all of that stuff happens at the platform level.
C
The spec says that the lifecycle does it, but it doesn't do it. And so, in theory, if your platform has opted into this experimental feature, your runtime base image changes after you run detect and build (that's right, the extensions generate the Dockerfiles), and then you get a new runtime base image.
C
My opinion is it's sort of up to the platform, right? The platform opted into this experimental feature, and the platform currently has the onus to validate the mixins. So the platform can just do another validation before running build or export or whatever, if they want to use this feature. I think in the future we'll push more of that responsibility into the lifecycle, and the platform can do less, but...
C
I don't know. I don't see a strong need to add additional stuff right now in the lifecycle, but, you know, obviously I'm not a platform operator, so my thinking could be different from what others might think.
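As a rough sketch of what such a platform-side check could look like (a hypothetical illustration, not the actual pack or kpack code): the platform compares the mixins the stack provides against the ones the selected buildpacks require and fails on any that are missing.

```go
package main

import "fmt"

// missingMixins returns the mixins required by the selected buildpacks
// that are not provided by the stack's build/run images.
func missingMixins(required, provided []string) []string {
	have := make(map[string]bool, len(provided))
	for _, m := range provided {
		have[m] = true
	}
	var missing []string
	for _, m := range required {
		if !have[m] {
			missing = append(missing, m)
		}
	}
	return missing
}

func main() {
	required := []string{"python", "libpq"} // from the detected buildpacks
	provided := []string{"python"}          // declared by the stack images
	if missing := missingMixins(required, provided); len(missing) > 0 {
		fmt.Println("mixin validation failed; missing:", missing)
	}
}
```

Because this is a pure set comparison over image metadata, the platform can run it without starting a container, which is the speed argument C makes below.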
D
Okay, so I think, you know, I don't know if we want to hammer out, like, "hey, are we spec compliant," because it sounds like we're technically not, and we won't be. So I think we probably need to hash that out at some point: either update the spec to say that the lifecycle won't be doing the validation and it's the platform's responsibility, or the inverse. But at some point I do think we should align there.
A
But yeah, and at that point, if the lifecycle is not doing it and the platform has to inject something in the middle of these phases to validate mixins, I don't think that's something we've ever asked platforms to do, like consuming intermediate...
A
I think part of the reason platforms started doing it on their own was that they started checking it during builder creation time, where the lifecycle doesn't even come into play. If you wanted to move it to build time, where you're checking it for each image build operation, then the lifecycle could potentially validate it. But right now we just validate it during builder creation, for the platforms that do validate it, which is just pack and kpack right now.
C
You know, we're just kind of trying to triage our time and energy here, right? Because the reality is, no one's really stepped forward to own this stack removal stuff, and so it's sort of the case that if we make it a blocker for Dockerfiles, we're just not going to ship Dockerfiles for X amount of time or more, and we've already kind of been waiting a while.
E
Works for me, yeah. And in that case, taking just the one scenario that I care most about, which is ubi8-based images where we're installing the dependencies using rpm: only extensions can install dependencies via rpms. So if you disable extensions with a builder that relies on that, then you'll have buildpacks that want to run subsequently that will find that their dependencies haven't been met, so the build will fail.
D
Yeah, I guess what I want to do is make sure that we satisfy realistic use cases, right, not just the easiest use case that we can think of. And if some of that means that we should iron out this validation aspect of things, then I feel like we should, so as not to leave the users, whether they be end users or platforms or whatever, in a chaotic state where they can't fully, you know, satisfy the...
C
But let me try to make the case one more time, because in my view I feel like there's no problem, but maybe I'm not seeing it right. So you can think of it like this: on the one side you have a platform, let's say kpack, where the platform operator has sort of full control. You control the platform and you control the builder. So if you're going to let your users use a builder that relies on extensions, and you enable extensions in your platform...
C
You can ensure that the extensions that you provide don't allow your end users to select an arbitrary run image. So the platform operator has within their control all the means necessary to make sure that it works. And then you kind of zoom over to the other end, where there's, let's say, pack, where the end user has control over their builder, but you're still sort of saying "use at your own risk," right?
D
Okay, so I think there's a difference, right: we're talking about an experimental feature, whereas I guess maybe I'm thinking about how to move forward past the experimental feature. Ultimately, I don't care about the experimental feature; like, yeah, it's experimental, use at your own risk. What I care about, and what I thought we were talking about, is what the next steps are after that experimental phase. Like, what is the next quick follow-up?
A
What would validation still look like from a platform basis? Because in the end, the run image validation would depend on which buildpacks were selected. So let's say you have a buildpack that requires, I don't know, a python mixin, so it requires the python interpreter to be on the run image, and one of the run images has python and the rest don't, like maybe a scratch image or a node image, whatever. Only when the python buildpack is selected do you want that mixin validation against the python run image.
C
Having the lifecycle do the validation is like the easiest thing, right? I actually don't even know why; I think the reason the lifecycle doesn't do it is because it's faster if the platform does it, since the platform doesn't even need to start a container in order to do it. I'm not opposed to doing it. I just think it's, you know...
A
I think an alternative option is: if, as a platform, you create a builder with extensions and with this experimental API, you just implicitly opt in to no mixin validation. That's something the platform agrees to. I think we have agreed as a project that mixin validation didn't serve the needs it was supposed to, and it was overly strict, and that's why we got rid of it in the first place.
D
Okay, yeah, that makes sense for experimental, yeah, right.
A
Yep, so we can release this as experimental. If you create a builder with this experimental feature, you opt into the fact that there will be no mixin validation, and when we get to stack removal, we can figure out how to actually deal with this; until that point, the experimental feature doesn't become stable.
A
Does that sound good? That would unlock the next lifecycle release. It would mean that platforms that want to use this feature are agreeing to let go of mixin validation, and, you know, I don't know how many platforms actually care, or are actually providing correct mixins and then using that in buildpacks.
D
Yeah, I was gonna say, I think we would want it when we create a builder, to some extent, but for that reason we could take it in as a library. So we would ask that that logic be consumable as a library, and then everybody's happy.
A
I think at some point we wouldn't even need the validation. The validation was there to prevent cases where you were using buildpacks that needed some dependencies but you couldn't make those dependencies available in your base images. With extensions, rather than just validating, you can mutate your build or run images to conform to whatever your buildpack needs. So I think the extensions are sort of the solution to getting rid of mixins once and for all; you don't need to validate for them anymore.
E
Yeah, I think the fun comes in when you start considering rebasing, which is later, yeah. Everything up until that point is fine, because it's the rebasing operation which required the mixin validation, in a way, to know that what you're about to do makes sense. But we've already got some of this stuff in the Dockerfile RFC that says that, basically, certain things won't be rebasable unless you swear blind, as your best promise as a developer, that it will still work.
A
I think for rebasability we had introduced this concept of image families, or image family identifiers. I don't think we've actually mapped out how that would work.
A
So let's say, during this whole extension stuff, you switch to a python-based run image and you mark that as rebasable equals true, and later you want to rebase it with a new version of that python-based run image. You need to know which family of images it comes from in order to rebase it.
E
Yeah, it's going to be interesting. But, I mean, the Dockerfile RFC at the moment just has the flag that says rebasable, true or not. If you set that, rebasing is allowed, and the implication of setting it is that you're declaring that whatever you rebase it to, or whatever state you're in, is going to be something that meets the requirements of the application, which is what mixins did before. So it's interesting how all of this stuff is a lot more connected. But yeah.
A
If,
if
we
still
want
to
preserve
a
rebase
with
this
experimental
feature,
then
I
think
the
only
way
to
go
about
it
is
what
natalie
said
with
like
having
pre-canned
run
images
in
your
builder
config.
So
if
you
change
the
builder
config
with
the
new
version
of
that
image,
then
the
platform
knows
how
to
rebase
it
so
like
for
platforms
like
kpac
that
are
declarative,
and
your
builder
config
dictates
when
to
trigger
a
rebase
operation.
A
You
can
update
your
builder
config
with
a
new
version
of
the
run
image,
and
it
can
actually
like
the
life
cycle,
puts
an
appropriate
metadata
in
the
output
image
anyway
to
figure
out
which
base
image
it
used
for
the
run
image.
So
this
way
it
will
know
that.
Okay,
I
changed
from
this
to
this.
So
all
the
applications
that
were
on
image
a
now
need
to
be
moved
to
image
b.
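Purely for illustration, a platform could read that metadata along these lines. The label value's JSON shape below is an assumption sketched for this example, not an authoritative schema; check the lifecycle's actual image metadata for the real layout.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// runImageMetadata mirrors, approximately, the run-image portion of the
// metadata the lifecycle writes as a label on the output image.
type runImageMetadata struct {
	RunImage struct {
		Reference string `json:"reference"`
	} `json:"runImage"`
}

// runImageRef extracts the run image reference from the label's JSON value,
// so a declarative platform can tell which base image an app was built on.
func runImageRef(labelJSON string) (string, error) {
	var md runImageMetadata
	if err := json.Unmarshal([]byte(labelJSON), &md); err != nil {
		return "", err
	}
	return md.RunImage.Reference, nil
}

func main() {
	// Hypothetical label value for demonstration.
	label := `{"runImage":{"reference":"registry.example.com/run@sha256:abc"}}`
	ref, err := runImageRef(label)
	if err != nil {
		panic(err)
	}
	fmt.Println("app was built on run image:", ref)
}
```

A platform comparing this reference against the builder config's current run image would know which apps need to move from image A to image B, as described above.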
E
So with that, then, I think that's where this whole conversation comes back together, and I think, yeah, you're right. We shouldn't try and revisit this until after we've at least dealt with the capability of allowing other run images, and then, as we figure out how to take that from experimental to full, we're gonna have to tackle these problems as we go.
D
Is there anything we need to do to notify the user that they're foregoing mixin validation with the experimental API? Is it, like, documentation we should update, or a warning that the lifecycle or the platform should...
D
That's a good point. Looking at it from the pack perspective, if we're doing this on create-builder, obviously we could do it there.
C
There are kind of two; I was spending some time on this yesterday. There are two concepts of experimental that we introduced a long time ago and never implemented. The first is the concept of an experimental API, or an API that's not ready and will be subject to change. And then there's the concept of an experimental feature, which is basically: you have to set an environment variable. So, as a platform or as an app developer, pack would need to expose a flag...
C
I think, to turn on experimental features in the lifecycle, and if you haven't set that variable, it will refuse, or it will warn, or something will happen, right? I can get the RFC; there are two RFCs that deal with it, and I think we'll probably want both for Dockerfiles. The first one, the experimental API, is more of a protection for the lifecycle developers, right: like, we will not have to maintain infinitely old versions of this.
D
And then the builder should have the environment variable that you mentioned, right? Do you mind linking that in the document?
C
We
we
spent
several
minutes
now
on
this
topic.
I
did
want
to
bring
up
just
sort
of
like
the
second
half
of
the
the
list
of
concerns
which
was
around
putting
stack
removal
stuff
in
the
distribution
api.
I
know
this
was
particularly
a
concern
of
emily's
and
no
she's,
not
here,
but
just
wondering
if
anyone
had
thoughts
about
that
kind
of
at
a
high
level.
C
I think, you know, and I haven't thought about this too deeply, but at first glance it feels like it's taking two specs that are supposed to be disjoint (that's why they're not one spec but two) and really saying that the platform spec now has a hard dependency on the distribution spec. You can say, like, this platform spec requires distribution spec 0.3 or above, or, you know...
C
Just as an example: the distribution spec will say your builder image must have a label indicating, like, io.buildpacks.distribution.name or something like that. So that's a requirement in the distribution spec, but the lifecycle might need to read that label in order to provide some platform features.
D
I think the reason why we moved it out was because we were already experiencing this issue within the spec itself. I'm trying to remember if there was anything concrete that I could bring up as an example, but I think the idea of separating is more or less, sort of, artifact versus runtime. So the distribution spec is the artifact, the thing that gets stored somewhere, and these are the things that, you know...
D
If you want to distribute this artifact that's then going to be used by this thing that executes buildpacks, that's the only thing you should be looking at. Whereas the platform API, and even more so the buildpack API, they define sort of an interface on the runtime. And I guess I see where, effectively, they're almost, or essentially are, the same thing.
C
I just think that this conversation has been looming in our minds for a while as something that we need to have, but we're not exactly sure what we want to get out of it, and in the meantime we're just sort of delaying doing anything about it. So...
B
Sounds good. Do we want to just put a pin in this particular topic? I think Emily will probably be back next week. It sounds like, at least from when I was texting her, she was recovering, but not quite there yet to be in meetings and stuff.
A
That's the end of the meeting. Thank you all for joining; we'll see you next time.