From YouTube: Working Group: 2022-04-12
A: Welcome everyone to the Paketo working group meeting. Today is April the 12th, and we're glad that you're here. Feel free to add yourself to the attendees list, in case you haven't done so already.
A: Great, okay. Any new faces? I don't think so.
D: So I haven't been here for a couple of weeks, just been busy, but it looks like it has plenty of approvals.
A: Okay, next up, there is what I believe is a new RFC.
D: Where is this RFC located in the file structure of the RFCs repo? Unless this is meant to be a top-level RFC? Because if this is nested inside of the PHP RFCs folder, it should be mergeable by just the PHP maintainers after their approval.
E: I'm glad we're getting this out here. I feel like this is a great one for group bikeshedding when it comes to the names of labels and stack IDs. I have just finished leaving a couple of comments there for bikeshed conversations. I'm not saying that I'm totally wedded to one answer or the other in my mind. The first one is whether we're going with io.paketo.
E: Rather than io.buildpacks. It kind of makes sense because, you know, the Buildpacks project defines things in the io.buildpacks namespace, and there are only really mixin definitions for Bionic stuff. That said, I would argue that, despite the fact that it makes logical sense, we might want to use it anyway, because the project is moving away from stacks. So it's not going to define more specs, and this would be consistent for our users.
E: I think it would have to mean, you know, a stack built on top of Jammy. It would be very strange otherwise. But as long as we're not, I think the only thing where you could possibly imagine there being a contradiction (and I don't think this would happen) is in mixin definitions. But if we're sort of not even trying to apply those, then there's no danger there.
G: I should know this as a stacks maintainer, but what are the ramifications of having the ID be in the buildpacks namespace versus the Paketo one? I don't really know why that makes a difference.
E: It matters in a world where we thought these stack IDs and mixin concepts were going to be more important, where that string also implied a bunch of things about the environment that you're running in. So that, you know, when you put it in your buildpack.toml, you're saying "I can run in an environment that has x and y features." And if it's in the buildpacks namespace, and they decide to publish a document describing what all the assumptions are that go into the contract for that string, that could contradict the assumptions we were making, and then it gets awkward.
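For context, this is where that string lives: a buildpack declares the stacks it claims to support in its buildpack.toml. A minimal sketch (the buildpack ID here is illustrative):

```toml
# buildpack.toml (sketch; the buildpack id is a placeholder)
[buildpack]
id = "paketo-buildpacks/example"
version = "0.0.1"

# Each stacks entry claims "I can run in an environment identified by this string."
[[stacks]]
id = "io.buildpacks.stacks.bionic"
# Mixins extend the claim with extra packages assumed present on the stack.
mixins = ["libgdiplus"]
```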
E: I'd rather be optimistic and then see things fail because they're failing, rather than doing a lot of aggressive checking, especially because the things that are being aggressively checked, like stacks and mixins, are sort of metadata that needs to be generated by the stack author. It's not smart enough to really know anything other than what you put on there in labels.
E: So it's not like it was actually checking the underlying compatibility for you; it was just checking whether all the generated metadata matched up. And then, you know, a lot of folks outside the Paketo project weren't even generating this metadata. So if a Paketo buildpack is like, oh...
E: ...it can't run on that stack because it's missing a mixin, it's not because it really couldn't run on that base image, it's just because nobody stuck the label on that base image, that kind of stuff. So there are many situations where the existence of these labels was preventing things that could work from running. What you'll get from removing them is a situation where you can optimistically try to run things that really aren't going to work in the end. But that's the trade-off.
E: Yeah, and then it's trying to add a little bit of metadata, just things like the OS version, and I think there's a place for the version-like data, so like, Ubuntu is Debian-like. But all of the things that are preserved here are things that can be canonically checked within the image. So there's a file called os-release that comes in these Linux distribution images, and you can compare the values in the buildpack to what's actually in the image.
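A rough sketch of that check, in Python: parse the os-release file and compare it against values a buildpack declares. The field names follow the standard os-release format; the `declared` dict is a hypothetical stand-in for whatever metadata a buildpack would carry, not an actual Paketo interface.

```python
# Sketch: compare values a buildpack declares against /etc/os-release
# in the run image. The "declared" dict is an illustrative assumption.

def parse_os_release(text: str) -> dict:
    """Parse KEY=value lines from an os-release file into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

def is_compatible(declared: dict, os_release: dict) -> bool:
    """True if every declared field matches what the image reports."""
    return all(os_release.get(k) == v for k, v in declared.items())

sample = '''
NAME="Ubuntu"
ID=ubuntu
ID_LIKE=debian
VERSION_ID="22.04"
'''
declared = {"ID": "ubuntu", "VERSION_ID": "22.04"}
print(is_compatible(declared, parse_os_release(sample)))  # True
```

The point of the discussion is exactly this: every key has one canonical right answer inside the image, so no white paper is needed to interpret it.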
E: We do need to move that information from the os-release file into environment variables and labels, so people can perform the checks that they want to. But it's all data for which there's a right answer, one that doesn't involve reading a white paper about how someone defined their particular stack.
E: I think there's an acknowledgement that part of the reason this hasn't been implemented anywhere is because it was sort of left in an unimplementable state. But Jesse from Salesforce on the CNB team is now going back and writing a sort of migration guide, trying to fill out what exactly the steps to migrate would look like.
B: He describes some use cases where there are buildpacks in the ecosystem he maintains that have a strict dependency on the stack ID being something that is differentiable between the full and base images, and that's something I'd like to get rid of. It's also something that, from the CNB specification, is clearly going to just disappear at some point in the future. And so we need some mechanism whereby we can basically, in the build phase, indicate what things are available on the run image; normally this would have been done through mixins.
B: I could have said to him, you know: oh, there's going to be a set of mixins you can see; the mixin for bash is only available in the build phase, and so that's what you can use as an identifier to say, oh, I need to do this extra work in my buildpack to make this work in an environment that doesn't have a shell.
E: I think it's a very interesting question. I think the situation is slightly worse in the new world, but it's not much worse, because buildpacks could never see the mixins before anyway; the only thing you had to go on was the stack ID. So basically, every time you really had a moment where you needed to make a decision like that, you'd probably need to make a new stack ID, and the cases where we do this are all for Tiny right now.
E: The Procfile buildpack is one example: am I launching the process with bash, or am I directly executing it? Another one is the native-image buildpack; there are certain libraries that you can dynamically link against on base, but you need to statically link into a Java native image on Tiny. The situation (and this is one of the things that I want to bring up) is not really different in the new world, because you still have one environment variable, called something like target ID, in this new RFC.
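As a rough sketch of the kind of decision being described, here is a buildpack choosing a launch strategy from a target identifier. The environment variable name `CNB_TARGET_ID` and the ID strings are assumptions based on the discussion, not a confirmed interface from the RFC:

```python
import os

# Decide whether a process can be launched via a shell, based on a
# target/stack identifier. The variable name CNB_TARGET_ID and the
# "tiny" naming convention are illustrative assumptions.

def launch_strategy(target_id: str) -> str:
    """Return 'direct' for shell-less targets, 'bash' otherwise."""
    if "tiny" in target_id:
        # Tiny-style images ship no shell, so exec the process directly.
        return "direct"
    return "bash"

target = os.environ.get("CNB_TARGET_ID", "io.buildpacks.stacks.jammy")
print(launch_strategy(target))
```

The concern raised next is exactly that a single ID string like this may not offer enough degrees of freedom.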
E: I don't think that's a great solution for every problem, but that is what exists, and I think we should talk about what we're doing for Jammy, because one of the things that I like aesthetically about this proposal is that it doesn't have a different stack ID for Tiny. But I worry that we maybe need that degree of freedom.
E: Unfortunately, I think that's true. I've been trying to think of other ways around it, like, could we put some environment variable on the stack image? Well, actually, the buildpacks don't inherit all the environment variables from the stack image, only special ones. So no. And there are other places where this has come up a lot recently, where it'd be convenient to be able to inject config at the builder creation level, whether it's, say, wanting different defaults for my environment variables on this builder versus that one.
E: That would actually be a really nice solution to our problem here, because we know, when we're making the builder, what set of stack stuff we're using, so we could change the defaults. But none of that exists, and we need to make Jammy stacks right now, and I don't have any clever ideas other than what we've already been doing.
D: ...a feature set that was proposed in an RFC a while back, to allow individuals that have static apps to not have to actually write out their own httpd configs. A similar line of work is in the pipeline for nginx, but the httpd piece of work is live now. You can go to, I think it's httpd 0.4.0.
A: Great, thank you, yeah. Please put the link there. Awesome, all right. Next one, this one's mine: Paketo Buildpacks OpenSSF badge profile, like, what is that? Well, it turns out that OpenSSF, the Open Source Security Foundation, is the new home for a set of best practices that the Linux Foundation has maintained for a long time, in terms of the baseline of security and several other technical areas for open source projects.
A: This is a good idea. I mean, it's a good idea to have a badge that shows at least the minimum level, which is the passing level. Passing level requires completing all this information and providing evidence that, I don't know, for example, the project repo includes interim versions for review between releases, right? We are not obligated to say that we meet every single requirement; probably there are some that do not apply to the project, but we need to provide a justification in that case.
A: So the first thing was to have a profile there for Paketo Buildpacks. Here in the list of projects you can find several other projects at different levels of compliance.
A: You can find, I don't know, Node.js, it's passing; but you can also find the Linux kernel itself, and it's not just passing, not silver, it's gold level right now in terms of compliance with the guidelines for community engagement and all of that. The goal is to reach the passing grade.
A: Silver level is a stretch goal for later on, but right now the goal is to complete the passing level. This is completely manual, so I will be completing the information I have about the project, but there will be stuff that I just don't know, and we'll have to have a session with some of you to complete it: I don't know, how do we handle security? How do we handle code analysis?
A: Things like that. So again, we don't need to say "met" for each one of the requirements, but we need to say something about how we handle this right now. So yeah, that's one of the many action items from the health assessment that I started late last week, and hopefully we'll be able to present to you all very soon. But this is just part of many things to keep supporting the project, right?
D: Yeah, so this has been something that annoyed me. Over lunch and this afternoon I was trying to cut a release of the git buildpack, removing all of the stack identifiers other than the wildcard operator.
D: I got approval for that, cut the release, and discovered that currently GitHub Actions are degraded; when I cut the release, no action was launched for the push-buildpack action, which means that there's no publishing of our buildpack anywhere. It's just a GitHub release at the moment. This is particularly annoying for this particular action because it is triggered by the publish workflow, like the publish event: when you click the publish button, this workflow is triggered by that, and I assume that it takes information from that event to actually run.
D: I can't trigger this action any other way; if I go in there, sometimes you can manually trigger an action, but I can't do that. And so I'm kind of now left with a limbo release on GitHub, the git buildpack version 0.4.2 or something like that, that has only had a release made. I posted about this on the Paketo Slack inside of the core-dev channel, and Daniel got on and was like, yeah, this happened to us last time; there were some storms at GitHub.
D: I wanted to just, in a room with, I think, a couple of other maintainers for the utilities buildpacks, see if that was an okay move to make. So I will go ahead and go forward with it. But I guess, in the future, I was wondering if it might be worth our time to do an audit of all of our actions and see that there is a way, if GitHub has a storm where these things are lost to the internet ether, for us to go back in and actually unstick ourselves.
B: I would say what Dan outlines is pretty reasonable. I ended up doing this for yarn-install like a month ago; it works fine. When you delete a release, the tag for it doesn't really get deleted; you'd have to delete that also if you want to recreate everything. But you can also just leave the tag on the SHA that it's on, and then you can recreate the release, saying create it from this particular tag. That way you're not actually moving or changing the contents of the release in any way.
B: Then you can just say create new release, by clicking the button to create a release on GitHub, not running any of our actions: manually recreate the release by just clicking "new release". Then you just paste the notes in, upload the artifacts, say create it from the tag that already exists, and then just say publish release. That should work.
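Those manual steps map onto a single call to GitHub's create-release REST endpoint: because `tag_name` points at a tag that already exists, GitHub reuses it instead of tagging a new commit. A sketch that only builds the request (owner, repo, tag, and token are placeholders):

```python
import json
import urllib.request

# Recreate a GitHub release from an existing tag. Since the tag already
# exists, the release contents are unchanged; GitHub just attaches a new
# release object to it. Owner/repo/tag values here are placeholders.

def make_release_request(owner: str, repo: str, tag: str, notes: str, token: str):
    """Build the POST request for GitHub's create-release endpoint."""
    payload = {
        "tag_name": tag,   # existing tag; no new commit gets tagged
        "name": tag,
        "body": notes,     # the release notes pasted back in
        "draft": False,
        "prerelease": False,
    }
    return urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/releases",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

req = make_release_request("paketo-buildpacks", "git", "v0.4.2", "…", "<token>")
print(req.full_url)
```

Uploading the artifacts is a separate call against the new release's upload URL, mirroring the manual "upload the artifacts" step above.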
E: In general, I really like it when a CI system is just running scripts or reusable components that one could just as easily run themselves, rather than it sort of being tightly coupled to the context that it's running in. But it might be a few steps from here to there. I know when I ran into this problem, I just ended up manually pushing some images, a long time ago on the Java buildpacks, and it was not reproducible from the CI config at all.
D: I have recently gone through and been trying to cut releases of personal buildpacks that I have, that I've been sharing with people and that sort of thing, and putting them on my own personal repositories and the Cloud Native Buildpacks registry.
D: The process for actually doing that is incredibly clunky, involving three CLIs minimum, and it is a giant pain.
D: So I would love to find a way to condense that into something better, but I don't know how to do that.
B: I think this has always been the bigger issue with GitHub Actions: the reusable components, the action parts, are actually pretty nice. They can basically be containerized as a Docker image, and you kind of can just invoke the image and have it do the thing you need to do with some reasonable inputs. But anything that goes into the workflow file, where you string together a series of actions that interrelate and depend upon each other...
E: I don't know how well maintained it is, or how well it handles all types of triggers, like a release. Can it recreate a job from a release trigger? I don't know, yeah.
H: It might be worth a try; you have to pass it a JSON value I've mocked, whatever the trigger is mocked out to be. We use it quite heavily, but only really for simulating pull requests and kinds of events like those.
D: Yeah, I think that's worth looking into, but for now I think it'll be easier for me to just copy and paste some stuff around really quickly and download two artifacts to my computer. But it does get into the larger question, or a lot larger question, and I think you guys have talked about this.
D: We've talked about it pretty extensively. I think that when GitHub Actions is running, it's very nice; it's a transparent way for us to see everything, and a way for users to more actively affect what the workflows look like. But it really does like to just sort of stop without any warning, and then that leads to slowdowns, and I think, beyond slowdowns, it leads us to being in weird in-between phases where, you know, we can have these releases that just kind of disappear magically, and it's unfortunate. Anyway.