From YouTube: Platform Sync: 2020-09-16
Description
* Status Updates
* Release Planning
* Automate creating GHA Windows-LCOW runner (ci#21)
* WCOW - pushing it back to GitHub
* pack 0.14.0 (discussion needed)
A
I don't know what happened, but my video background is broken and I can't seem to turn it back on, and so now you could literally see my green screen.
B
I did run into that before, if you have two Zoom meetings open. But probably check it out.
A
All right, as a reminder, there is a doc associated with this meeting. If you could go ahead and add your name to the list real quick.
A
Okay, let's get started with status updates. I'll kick us off. From my perspective, I've been working a lot on the buildpacks.io website. Some changes that are relevant to this meeting: pack is now a top-level tool that's called out in the documentation.
A
So if you go to buildpacks.io and go to the docs section, there's a tools section now that has pack listed on there. Our hope is to bring more focus to pack as a tool, and not pack as the only thing that you could use within Cloud Native Buildpacks. So what we're going to be doing, under those tools, is at the very minimum call out additional tools or integrations that we have, such as the lifecycle.
B
I was going to mention that we had a long-running PR open that had been kind of stalled, because we were unable to get some of the tests being added to pass on the GitHub Actions runner. That's been fixed, but it did require some production code changes. I'll bump the PR again; it's almost in a ready-to-review state.
A
When you say production code changes, at what level are we talking about? Is it pack or the runner?
B
It's in pack's internals: the container-run code now doesn't do docker logs, or container logs. It instead does a container attach and then starts the container. But that's the one-sentence version of it; I can...
D
I was going to say, I was trying to find the Zoom window, which totally disappeared on my computer. But I spent this week working on an RFC for builders, trying to define exactly what is present on builders, and diving into some of the differences between Heroku and Paketo. It isn't really close to done, but it is up, so feel free to check that out.
A
There are a couple items that we probably could add to the agenda to discuss, that we skipped over from last week. I think, David, some of those pertain to you, so I'll add those to the agenda and we could discuss them at longer length after we talk about the LCOW GHA runner. Anthony, do you want to kick us off on that conversation?
C
Yeah, so in an earlier sub-team sync a couple weeks ago, I think we had expressed interest in sort of automating the provisioning process of these runners, just because when they do eventually flake out or whatnot, we'd just like to easily recreate them. I basically put up a PR that attempts to do that as much as possible: it uses Terraform under the hood to talk to Packet and do all the steps that are documented. There are a couple of things, though.
C
You still have to start Docker up by yourself, and the manual steps, I think, are still manual, but everything else is sort of just encapsulated there. I would have to find a way to keep secrets out of the open-source git repo or whatnot, but...
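For anyone who wants to picture what the automation does: the PR reportedly wraps the terraform CLI, but a minimal Go sketch using HashiCorp's terraform-exec library shows the same init-and-apply flow. The working directory, variable name, and environment variable here are illustrative assumptions, not details taken from the PR.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/hashicorp/terraform-exec/tfexec"
)

func main() {
	ctx := context.Background()

	// Hypothetical directory holding the Terraform config for the
	// Packet-hosted Windows runner; the real PR's layout may differ.
	tf, err := tfexec.NewTerraform("./tools/windows-runner", "terraform")
	if err != nil {
		log.Fatal(err)
	}
	tf.SetStdout(os.Stdout)

	// Download providers and initialize the working directory.
	if err := tf.Init(ctx, tfexec.Upgrade(true)); err != nil {
		log.Fatal(err)
	}

	// Create (or recreate, after a flaky runner is destroyed) the VM.
	// The variable name is an assumption; keeping the token in an env
	// var is one way to keep secrets out of the open-source repo.
	token := os.Getenv("PACKET_AUTH_TOKEN")
	if err := tf.Apply(ctx, tfexec.Var("auth_token="+token)); err != nil {
		log.Fatal(err)
	}
}
```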
A
Interesting. Just out of curiosity, since I haven't looked at this at length, two questions come up. So, we're talking about some manual steps.
A
My hope and assumption is that these manual steps could at some point also be automated through iteration. Or is there a hard limitation as to what can be done through Terraform that would prevent that from happening?
C
Good question. So, I believe there were already some manual steps around, like, starting pack up. Sorry, I never was able to see that, but I believe Natalie said so, so I would lean on you all for however the pack process gets started up. But initiating Docker specifically has to be manual, at least from what I could see: not installing it, just turning it on the first time and setting up the shared drives, maybe.
A
Right, okay. Yeah, I'm pretty sure if we wanted to, we could do additional research to see whether or not we could hack around it. I know Micah is infamous for hacking around some of this stuff.
B
As far as the Terraform stuff, too: do folks in general feel like Packet might be a valid way to spin up non-CI VMs? Potentially having shared Windows VMs that all core contributors could use? Or do we prefer to use, like, generic IaaS for that?
A
I see what you're saying: whether Packet... yeah, that's a really good question. I know right now, as far as the CNCF is concerned, we are using Packet solely for CI. I don't know if there would be any sort of considerations there, or if we'd have to let them know that we're also using it for active development.
A
I wouldn't think so, given that we already have at least a Google Cloud account, a GCP account, that we can use for development as well. So I don't know, but it's probably something worth bringing up to the CNCF.
B
Yeah, I think the reproducibility of the CI failures in general would be a nice-to-have feature. But if we only have one set of CI jobs running on Packet, then maybe we want a more generic solution or something. I feel like, if we like that Terraform workflow, though, it might be worth making the process for standing up the one-off Windows machines also use Terraform.
C
Yeah, I was thinking that belonged in the wiki, which, I don't know, you can't put in a PR, but yeah. It's literally just a wrapper script called terraformw.
C
You
know
if
you're
familiar
with
terraform
you
it
just
sort
of
funnels
all
the
commands
down,
but
does
the
prep
for
you
so,
but
that's
the
yeah
I'll
put
it
to
docs
in
somewhere.
Somehow.
C
Yep, completely right. I just think we're still in a state where we're not sure whether we want to keep going with WCOW as opposed to the default runners. So maybe it's too early for that. But yes, it could easily be WCOW.
A
Let's add that to the agenda, because I was curious about our findings on the WCOW GitHub solution, being able to push it back there. Did we get anywhere with that?
B
Yeah, we're good to dive into it now; I'd be happy to give an update on that. So, it appears that that should work. We have an upcoming story that's going to actually do the work with the latest work that's on main, and make sure we're back in the state in which we could run everything on the vanilla worker.
B
Sorry, let me restate my sentence. We did at one point get main to run on a vanilla WCOW worker; it all worked fine. We've subsequently added some more test coverage for Windows stuff and unskipped a few tests, and those tests were failing again.
B
We have a little bit of work to make those skipped tests pass again, but I haven't seen any clear sign that a vanilla WCOW worker will not work. I think it's very likely that we should be able to use a vanilla worker for WCOW. That would leave our last remaining self-hosted machine as the LCOW one. But I feel like there are pros and cons to having the self-hosted WCOW, and a lot of the cons have sort of gone away with a very scriptable, easy-to-use...
B
...reproducible self-hosted runner. But I think there's still maybe enough debate to be had on either side about which one we would keep. The nice bit is that switching from one to the other is really low effort, with all the automation work that Anthony put in. And once it's working on a vanilla runner, it's unlikely to stop working again, so I feel like we don't necessarily have to make the choice right now. But that's just to answer your question.
A
So it is an option that we could leverage if we choose to. And yeah, speaking to why we might want to: I believe the WCOW runner recently gave us some issues. I don't know exactly what the fix was; I believe Dan took a look at it, kicked it maybe, and it just worked. We've had to kick it a couple more times than we'd like, and I'd hate to end up restarting it every night via a cron job as a solution. So if someone else could take that off of our hands, that'd be awesome.
A
Okay, any other conversation about WCOW?
A
All right, let's move on. We have, I believe, 10 minutes; we could cover some of the items that require discussion on pack 0.14.0. I went ahead and added it to the agenda, so you can click on that link.
A
Now, if we look at just the discussion-needed items specifically, the first one up top is 207. I think it's probably better if I just share my...
A
...screen. So, 207 talks about adjusting the container limits on a build container. David, I believe you had some further insight from the Paketo team that may be worth having a conversation around.
D
I wouldn't quite put it that way. I had just spent a bit of time in the Paketo Slack, and I saw that someone requested being able to adjust container limits and was a bit confused by what container sizes they got.
D
I mean, I definitely haven't done much research. I think... I believe it's a flag we can pass in.
D
Well, I guess the difficulty is, we don't quite know; we don't necessarily deal with the Docker daemon directly. So it's a bit hard to do.
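For context on what an "adjust container limits" flag would ultimately have to wire through: in the Docker Engine API, which pack talks to via the Go client, resource limits are set on HostConfig.Resources at container-creation time. A minimal sketch; the image name and limit values are arbitrary examples, not anything from the issue.

```go
package main

import (
	"context"
	"log"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}

	// Limits live on HostConfig and are fixed when the container is
	// created, so a pack-level flag would thread its values down here.
	hostConfig := &container.HostConfig{
		Resources: container.Resources{
			Memory:   2 * 1024 * 1024 * 1024, // 2 GiB; arbitrary example
			NanoCPUs: 2_000_000_000,          // 2 CPUs; arbitrary example
		},
	}

	resp, err := cli.ContainerCreate(context.Background(),
		&container.Config{Image: "example-builder:latest"}, // hypothetical image
		hostConfig, nil, nil, "")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created build container %s", resp.ID)
}
```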
A
Should we then maybe, instead of moving this into an actual enhancement, do some research into it, and have that research yield at least a plan on possible options first? Does that make sense?
D
At the very least, I wanted to give a build for people to actually check it out. But as it was, quiet mode was not really fulfilling its mission: it was randomly having the logs from the fetcher appear. So this, at the very least, takes that out for quiet mode, and then it's just adding in what the app name and SHA is. So if that's useful, great; if not, at the very least we should make sure our quiet mode is fully quiet.
A
Cool. Yeah, I personally like what came out of this, so I think that seems reasonable to me.
A
We could move this over and remove this tag.
A
I have issues with the in-progress status, but we could take those on later.

A
"Should log lifecycle arguments."
A
We wanted to know how a certain lifecycle phase was executed, and environment variables were part of that conversation, right? So: what was the binary executed, what were the arguments, and what were the environment variables? And then the complexity came in when we talk about environment variables: there are environment variables that are being passed into the container itself, that might have to do with authentication and whatnot, and then there is another set of environment variables that is being passed on to disk, right?
A
So
I
don't
know
where
we
want
to
take
this,
but
if
we
want
to
like,
I
guess
I
could
see
this
going
both
ways,
one
of
them
being
the
iterative
approach
where
we
just
ignore
the
environment
variables
at
least
provide
the
binary
and
the
arguments
that
are
being
passed
in.
I
think
that
would
still
add
a
lot
of
value
and
then
add
the
complexity
of
environment
variables.
After
the
fact
where
we
display
that,
and
then
in
that
particular
issue,
we
do
kind
of
aggregate,
both
the
container
and
the
file
based
environment
variables.
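To make the iterative first step concrete, here is a sketch of logging just the binary and arguments while deliberately deferring the env-var question. The Phase type and its field names are hypothetical stand-ins, not pack's actual internals.

```go
package main

import (
	"log"
	"os"
	"strings"
)

// Phase is a hypothetical stand-in for however pack models a
// lifecycle phase invocation internally.
type Phase struct {
	Binary string   // e.g. /cnb/lifecycle/detector
	Args   []string // arguments passed to the phase binary
	Env    []string // env vars: deliberately ignored in this first pass
}

// logInvocation reports only the binary and its arguments, leaving the
// container-vs-file environment variable split to a later iteration.
func logInvocation(logger *log.Logger, p Phase) {
	logger.Printf("running phase: %s %s", p.Binary, strings.Join(p.Args, " "))
}

func main() {
	logger := log.New(os.Stderr, "", 0)
	logInvocation(logger, Phase{
		Binary: "/cnb/lifecycle/detector",
		Args:   []string{"-app", "/workspace", "-log-level", "info"},
	})
}
```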
A
So, we have the docs issue. I wonder if it's worth punting this, at least out of this version, and not making it something that we anticipate happening, especially if we don't have enough feedback yet. Cool, let's try 0.15 and see if we have something at that point in time, I mean.
A
"Continue to run image," yeah.
B
Oh, this is also slightly tangentially related, but sorry; if anyone wants to bail, that's fine. So, we stumbled on a bug in Windows where our current implementation for starting build containers was to call container start through the Docker API and then call container logs immediately after that.
B
But
there's
a
tiny
little
window
of
time
in
between
there
that
didn't
seem
to
matter
for
linux,
but
did
matter
for
windows,
so
you'd
actually
lose
some
logs
tests
would
fail,
etc.
B
We
tried
a
whole
bunch
of
different
approaches
to
try
and
make
it
work.
What
we
ended
up
doing
is
looking
a
lot
of
the
docker
cli
implementation
and
they
actually
like
so
for
a
command
like
docker
run.
It
doesn't
do
that
it
doesn't
do
the
start
and
then
logs
necessarily
instead
of
as
a
docker,
attach
followed
by
a
docker
run
and
then
streams
out
the
handles
to
standard
in
standard
out
directly
to
the
console,
or
you
know,
through
the
whatever
recognitions.
It
has
to
do
that.
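To make the attach-before-start pattern concrete, here is a minimal sketch against the Docker Go SDK. This is not the PR's code: container creation and waiting are elided, and the container ID is a placeholder.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/stdcopy"
)

// runAttached attaches to the container's output streams BEFORE
// starting it, so there is no window where output is produced with
// nothing listening (the gap that lost logs on Windows).
func runAttached(ctx context.Context, cli *client.Client, id string) error {
	attach, err := cli.ContainerAttach(ctx, id, types.ContainerAttachOptions{
		Stream: true,
		Stdout: true,
		Stderr: true,
	})
	if err != nil {
		return err
	}
	defer attach.Close()

	// Start only once the attach stream is in place.
	if err := cli.ContainerStart(ctx, id, types.ContainerStartOptions{}); err != nil {
		return err
	}

	// Demultiplex the combined stream into stdout/stderr (non-TTY case).
	_, err = stdcopy.StdCopy(os.Stdout, os.Stderr, attach.Reader)
	return err
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder ID: the container would be created beforehand.
	if err := runAttached(context.Background(), cli, "some-container-id"); err != nil {
		log.Fatal(err)
	}
}
```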
B
So you can see it in the PR once we finally get it open again, but we're proposing to change that for all OSes. It does seem to have a slight speed improvement for Linux; we've got to confirm whether that's actually anything significant. But I'm just kind of wondering if anyone has any big red flags about potentially doing that, or maybe has context on why the current implementation was chosen over what was happening in the Docker CLI.
B
With what we ran into, I do kind of want to do a little more poking to see if, by not using the log subsystem, there are logs that are lost or anything like that. But from what I could tell, attaching prior to starting the container pretty much guarantees you never lose anything. I'm just wondering if maybe those messages wouldn't be kept in docker logs internally, or something like that. But I just looked for hypothetical downsides; I didn't see any.
A
Yeah,
no,
I
think
modeling
it
after
the
docker
cli
seems
like
the
right
thing
to
do
so.
I
appreciate
doing
that
research.
A
The
only
thing
I
I
could
think
of-
and
this
is
totally
out
of
line
to
this
specifically
but
there's
a
draft
pr
in
which
I'm
attempting
to
do
where
I
am
messing
with
some
of
the
order
of
operations
there
to
be
able
to
do
interception
right
during
the
life
cycle
phases,
and
I'm
just
thinking
out
loud
to
try
to
understand
whether
or
not
there
would
be
some
implications
there.
But
I
could
most
likely
handle
those
after
the
fact,
and
so
I
don't
think
that
should
be
a
blocker
at
all.
D
On Windows? That draft, I think, has also been up for a bunch of months, hasn't it?