From YouTube: Working Group: August 2nd 2023
A: Could I get a designated note taker for today? "I'll take notes." Awesome, thank you very much.
A: I don't think there are any new faces. Sophie might be back at some point.
A: In that case, we can hop right into outstanding RFCs. First up will be "decouple dependencies from the buildpacks." This has had a fair amount of active discussion inside a Slack channel that Dan set up. That channel is public, so just go there and read the mountain of conversation we're having. I think we're narrowing in on what we'd want to put out as an initial proposal, at least to set up a framework or interface for this. It's a little bit difficult to align on exactly what that should look like at this moment, and Yan is on vacation for the next couple of weeks, so this might slow down a little.
A: Next up, we have the proposal to publish multi-arch buildpacks. Jericho, I assume there's been some discussion happening in Slack here and there, but I don't know where any of that's gone, because I have nothing monitoring this at all.
B: Yeah, that's fine. I think the last takeaway we kind of agreed on is that we need to get stacks and builders created in order to actually test this stuff. So, to move forward with this, I'm actually just focusing on that right now. I have not made an update to the RFC, but I can just put that update in there and say we need to focus on this stuff first.
A: If you aren't necessarily eliciting active feedback on it, something we've done in the past is to just draft RFCs: we just mark the PR as a draft. I don't know that I really care that much. The only thing that would do is mean we wouldn't go over it every week asking, "Are there any updates to the multi-arch buildpack RFC?"
B: Okay, well, if anyone in the Paketo world has a preference on how I should approach that, feel free to let me know. And when you say draft, is this just adding a draft label?
A: You can literally just come here and click "convert to draft." It'll just make it so that you can't merge the RFC, and you can continue to comment on it like a normal PR. Then, whenever you're ready, you just click "ready for review" and it'll be open back up as a normal PR again.
B: So yeah, I think I may just do it that way. I just don't know what this is going to end up being once we get all the other stuff we need in place. I assume it'll be similar, but who knows, we might have to make some changes. So, to save a review every week, I'll just put it in draft, and then when we're ready to get some more input, I'll take it out of that. Yeah.
A: It sounds like, at the end, this is probably going to be less a proposal of "here's the work that needs to be done" and more "I have some stacks and builders; are you willing to either let me donate them or spin them up?" A lot of the implementation work, I think, is no longer going to be an issue on this, so yeah, that sounds like it would work just fine. Okay, cool, thank you. And then finally, we have the proposal to introduce a new GraalVM buildpack.
A: It still needs CLA approval, and it appears that there is active discussion on this from the Java maintainers. Is there anything anyone here would want to talk about further on this?
A: Okay, in that case, we've gone through all of the open RFCs. Do we want to go back to any of these and talk about any of them more?
A: Nope. All right, next up: CNB updates and questions. I think that... oh, hold on, Dan posted in one of the Paketo Slack groups about a discussion that is being had inside of CNB, and it's eluding me right now. Yeah, there is a discussion; I'll come back and try to put it there, but there's an open discussion looking for buildpack author feedback.
A: Ah, that's it. All right, perfect, thank you. I can actually take a look at this, because, yeah, it's CNB interoperability after the removal of stacks.
A: Anything else before we move on? Any project updates?
A: It's not really a project update, but the only thing I wanted to quickly highlight, for those who were there: I was able to get the initial feature PR for the base UBI stack to pass the smoke tests that are present. Rob, I just called out that I wanted you to take one last look, and then I think we're going to go ahead and merge that, and hopefully have a release of the initial base UBI builder out. I don't want to say by the end of the day, because that feels really generous, but you know, sooner rather than later. I want to get an actual release out so that we can be at a stage where we can start showing it to people and saying, "Hey, how cool is this? UBI, am I right, guys?"
D: Yeah, that'll be cool. We've been working on the Java side of stuff. I finished a whole pile of updates to libjvm, adding a 2.x branch to that as well, to bring it up to libcnb 2.x and libpak 2.x. So that's the whole tree of all the various Paketo tooling that now has 2.x branches supporting extensions running through. Each piece has been done separately, so libcnb, libpak, pipeline-builder, and libjvm have all been brought up to have that support.
D: None of this is officially released yet; they're all depending on commit hashes between each other in the Go projects. But it's enough that it totally changed the code I'd written for the UBI Java extension, which was an absolute disaster zone. It now looks a lot cleaner, because it's basically working a lot like all of the other Java-based buildpacks do, except as an extension, so it has to do slightly different things.
D: There's some more fun coming up with how I do the environment configuration, because all the buildpacks there do their configuration by creating layers and putting the appropriate environment stuff into them, and extensions aren't allowed to create layers. But I think I've got a workaround.
D: I think if I create all of my layers in the builder image, I can have a buildpack just copy them into place later: the buildpack copies them into the layers directory during buildpack execution, and then the lifecycle puts them into the image correctly. So yeah, making great progress, basically. Thanks.
A: Extensions not being able to write layers is a little aggravating. It might be worth us going up to CNB, even if it's just to say, "Fine, we can't write layers, but the fact that you can't set environment variables without writing layers is a little bit frustrating."
D: I mean, you can set them. You can set environment variables straight into the builder or run image; if you're doing run-image modification, you can add ENV lines to your run image directly. But the problem is, if you want rebasing to work and you're staying with switchable base images, then you can't modify anything during the build phase.
D: The solution I've hit on might work, but I've got to do some real experimentation for the rest of this week and see if I can figure out what the context directory is when it's running those builds. Because if it's possible to have the generate phase write files that the Dockerfiles can then include into the builder image, then we could probably standardize, within Paketo or potentially upstream within CNB, on using /extension-layers as a layers directory for extensions.
D
You
only
need
one
build
pack
and
it
runs
first
and
it
says
hey
if
there
are
any
layers
present
in
the
extension
layers,
directory
I'll
copy
them
across
to
the
layers
directory
and
I'm
done
and
I'm
always
part
of
the
build,
because
I
always
participate
as
long
as
there's
an
extension
lines.
Directory.
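The "copy buildpack" being proposed could look something like the following `bin/build` sketch. The `/extension-layers` convention is the speaker's idea, not an existing standard; the demo setup stands in for what the extension's Dockerfile would have staged in the builder image, and relative paths are used so the sketch runs anywhere:

```shell
#!/usr/bin/env sh
# Sketch: an extension stages layers in the builder image under an agreed
# directory, and a buildpack that always participates copies them into the
# real layers directory so the lifecycle picks them up as its own layers.
# All names and paths are hypothetical.
set -eu

# Demo setup standing in for what the extension staged in the builder image.
mkdir -p extension-layers/java/env
printf '17' > extension-layers/java/env/EXAMPLE_JAVA_VERSION

layers_dir=./layers           # the lifecycle passes the real path to bin/build
staged=./extension-layers     # the proposed shared convention

mkdir -p "$layers_dir"
if [ -d "$staged" ]; then
  # Copy every staged layer wholesale, env/ dirs and <layer>.toml included.
  cp -R "$staged/." "$layers_dir/"
fi
```

After the copy, the staged layers are indistinguishable from layers the buildpack created itself, which is the whole point of the workaround.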
C: Yeah, that can work. As I said, the CNB team is always interested in hearing real-world feedback. A lot of this they develop either in isolation or with toy examples, right? Because they're not us; they're not actually trying to write this stuff out. So they are very open to feedback, and they do want to hear about the sharp edges and what works well.
C: And what doesn't. So I would definitely encourage you, when you figure out one or two paths forward, to write them out, summarize them, and share them with the CNB team. Let them know the pros and cons of each, and then hopefully that will improve the upstream experience so that we don't have to do as many machinations on our end.
D: Paketo is kind of unique at the moment, in that it's a rather nice, interconnected framework of buildpacks that we're attempting to connect extensions to. I think that's probably where parts of the original CNB spec for extensions were lacking: they didn't really look at the interplay between how extensions would connect to buildpacks in that manner. Yeah.
B: That was me. So I have a PR in, I think it's the Jammy Tiny stack; I can't remember what the order is. And there is an integration test where it does just a basic pack build.
B
What
I
wanted
to
kind
of
bring
up
here
is
that,
like
I've
I've
created
some
repos,
where
I've
done
some
of
this
stuff,
you
know
not
using
jam
and
any
of
the
pack.
You
know
paquetto,
tooling
and
I'm
actually
running
a
kind
of
an
integration
test
or
whatever
using
build
X.
So
it's
a
little
bit
meta
but
you're
running
back
inside
of
build
X
and
because
Pack
for
some
things
doesn't
actually
need
to
talk
to
the
Damon
to
the
docker
Damon.
B
It
actually
just
works,
and
then
you
just
get
this
multi-arch
kind
of
like
pack
environment
that
you
can
test
with.
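The nesting being described might be sketched like this. The Dockerfile contents and the builder and image names are invented placeholders, and the script only prints the buildx invocations rather than executing them, since this illustrates the shape of the idea, not the actual repo's test harness:

```shell
#!/usr/bin/env sh
# Sketch of running pack inside docker buildx so one integration test can
# exercise a stack on more than one architecture. All names are placeholders.
set -eu

workdir=./buildx-sketch
mkdir -p "$workdir"

# Throwaway Dockerfile: buildx supplies the target platform, and inside the
# build we run pack against the stack under test.
cat > "$workdir/Dockerfile" <<'EOF'
FROM ubuntu:22.04
COPY pack /usr/local/bin/pack
COPY testdata/app /app
# For some operations pack does not need the Docker daemon, which is what
# makes this nesting workable.
RUN pack build smoke-test --path /app --builder example/jammy-builder
EOF

# One build per architecture; printed rather than executed in this sketch.
for arch in amd64 arm64; do
  echo "docker buildx build --platform linux/$arch $workdir"
done
```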
B: So what I'm trying to get some input on is: if we're building multi-arch stacks, would you want to test them? Because right now the integration test only tests amd64. The code is going to change, but would you like that, and should I do that?
C: Without looking at the code that you haven't committed yet, it's hard to say, but my instinct is that we would want to treat arm64 at the same level of support as amd64, at least at the stack level. Each buildpack language family can make their own decisions, but I think the stacks are pretty fundamental, so we have to actually exercise them, test their functionality, etc.
C: Maybe this isn't answering your question, but in my mind, a desirable end goal is: I can run the stack's integration test, it builds the stack, and then it does the metadata tests. That's fine; that's not really an integration test, but we just have to build the image in order to check the metadata.
C: The interesting test is, "Can I run a pack build with this?" It doesn't have to be in the first iteration, but I wouldn't feel comfortable calling stack support for arm64 done until we have the ability to run those tests programmatically in CI against every change. Otherwise, I worry that we get that kind of unit-test problem where it works in a bubble: what it says it's going to do isn't actually the real behavior. So I guess, does that answer your question? I'm not sure if it does. No?
C: Yeah, and in my super-ideal world it would be abstracted. Similar to how we test multiple builders in the buildpacks, we'd just have a list of architectures in the stack.toml, and the integration test would dynamically run pack across all of them. Then, if and when we ever add more architectures, nothing changes.
C: The tests don't need to change to support that; the pack build testers need to change to support that. There's less risk of forgetting to add a new test for a new architecture. Maybe I'm over-optimizing, because we don't really add new architectures that often, but we can debate that when we see the code. But yeah, I think it would be great if it's possible to get an end-to-end pack build test into the stack integration tests.
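The matrix idea above could be sketched as follows. The `architectures` key, the file layout, and the parsing are assumptions for illustration, not the real Paketo stack descriptor schema or test code:

```shell
#!/usr/bin/env sh
# Sketch: declare the architecture list once in the stack descriptor and
# have the integration test loop over it, so adding an architecture later
# requires no test changes. Key name and parsing are illustrative only.
set -eu

cat > stack.toml <<'EOF'
architectures = ["amd64", "arm64"]
EOF

# Crude extraction that is good enough for a flat single-line TOML array.
archs="$(sed -n 's/^architectures = \[\(.*\)\]/\1/p' stack.toml | tr -d '"' | tr ',' ' ')"

for arch in $archs; do
  # In the real test this step would invoke an end-to-end pack build.
  echo "would test: pack build smoke-test for linux/$arch"
done
```

The payoff is exactly what C describes: growing the list in one place grows the test matrix, with no per-architecture test to forget.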
B
Yeah,
it's
possible
I've,
already
done
it,
so
I
just
need
to
kind
of
like
make
it
work
within
the
picado
kind
of
code
base.
So
just
another
two
two
things
the
first
one
is
I'm.
Gonna
need
to
use
a
builder
to
test
this
stuff
and
like
right
now,
I've
already
creating
a
simple
Jammy,
multi-arch
Builder
that
I
can
use
for
testing
and
I
probably
will
just
use
that
initially.
B
But
then,
once
we
get
this
working
I
would
just
you
know
we
would
have
the
kettle
build
one
just
use
the
piccato
one
that
you're
publishing
for
that
purpose
or
whatever,
like
a
base
like
a
build
packless
kind
of
Builder.
It
does
nothing
other
than
just
provide
the
Builder
and
then
so
that's
the
first
thing.
The
other
thing
was
and
so
I'm
getting
nod.
So
that
sounds
good
all
right
and
then
the
other
thing
was
if
I'm
going
to
I
feel
like
I
just
lost
it.
B
Oh
yeah,
okay,
this
is
what
it
was.
A
lot
of
the
testing
is
done
with
ocam
like
you're,
using
Docker,
so
I
need
I
would
need
to
run
Docker
build
X
one
way
or
another,
and
it
seemed
like
logical
to
me
to
go.
Add
Docker,
build
X
kind
of
command
to
ocam
I,
don't
know
if
I'm
pronouncing.
That
correctly.
C: Yeah, I think that would make sense. And just for the record, I think we pronounce it "Occam," after Occam's razor. There's a history in the buildpacks team of having testing frameworks named after sharp tools, like swords, and they've gotten smaller and smaller over the years. It started off as Cutlass, then machete, and now we're on Occam's razor.
C
So
that's
the
ridiculous
naming
scheme
behind
it,
but
yes,
I,
think
Arkham
would
be
a
great
place
or
adding
Docker
X
supports
Arkham
makes
sense
to
me,
given
that
that's
sort
of
its
purpose
right
like
if
we're
saying
that,
like
the
testing
of
the
build
packs
and
stacks,
requires
darker
build
exit.
That's
what
Occam
should
learn
to
do
right,
so
yeah
yeah.
C: And then let's go back to the first point, builders. I confess I haven't thought about this at all, but I'm assuming the end goal would be that the builders would also be multi-arch. So you're not talking about making a new repo and Docker Hub entry for a new buildpackless builder; it would just be extending the Jammy Tiny buildpackless builder to have multi-arch support, right? Right, yeah. Then that makes total sense to me.
B: Cool, all right, yeah, this is great feedback. I think that's all I wanted to cover, so thank you.
A: Great. I'll give everyone some time back, and yeah, have a great rest of your weekend.