From YouTube: Platform Sync: 2021-01-27
Description
Meeting notes: https://bit.ly/38pal2Z
A: Okay, I doubt anybody else is coming to this, so let's get started. I guess: anybody have any status updates?
A: Let's see. Dan, thanks for popping out that PR to change the volume labels; that was nice. We've been knocking out some of our older issues. Otherwise, my main status update is the next item, release planning, which is that we are looking to do CCB today.
A: This is pretty soon after our previous release, but that release itself was deferred by a number of weeks through the holidays, and I think Joe wanted to see some of the Buildpack Registry changes go out. It also meant I didn't have to change the calendar schedule, at least on the VMware calendar.
A
So
if
anybody
has
has
any
any
issues
with
that
we
brought
up
in
slack
yesterday,
you
can
feel
free
to
mention
that
now,
but
otherwise
we
would
go
into
ccb
today
end
the
day
today
and
then
cut
a
a
start
with
an
an
rc
branch
and
then
cut
a
release
next
wednesday.
A: Yeah, I mean, bug fixes we do during CCB as well, right, with advisement and so on, but features, I think, we try to get in now. So I'm trying to knock out a thing for releasing to Apple Silicon, which maybe we do during CCB anyway, because that's more distribution than actual feature work. But that is one thing we are trying to knock out.
A: Okay, and if no one has anything else, we can go into the needs-discussion issues in pack. I don't know, should I share? How should this work? Let me share.
A: Okay, everybody can see, so let's open this up. Okay, so there's this one: "environment variable is not accessible at runtime". This is actually an interesting issue that it would be nice to get some feedback on. Someone assumed that when you set --env with pack, those variables would also be accessible at runtime and not just during the build process.
A: I think Javier had a discussion with him, and the question was: should we just make it more explicit in our logs that this is build time only, not run time, or is there actually a feature here, a way to make variables that would be passed to the image?
C: Do we have a current solution for passing runtime environment variables? I guess it would be a buildpack that could take environment variables; I mean, you could totally work around it that way, but that seems not fun. As far as I can recall, there's no mechanism for transposing variables that you want set onto the runtime image.
A: That makes sense, so I will.
C: I, for one, would like to have this feature. Like, if I'm envisioning something like a Java Spring application that makes use of environment variables per, let's say, profile, and I have multiple profiles, a worker versus, I don't know, web processes, then I think right now we already have the capability for buildpacks to specify runtime environment variables per process.
C: So I guess I don't see why we wouldn't be able to set those same kinds of configuration elements at build time, at least from a user's perspective.
A: I do feel like it's better... I'd rather not have, you know, multiple things; I'd rather reduce the feature surface, because it can be done with buildpacks currently, or at the very least, I assume it should be pretty simple: you can have an env-file buildpack. I don't know if some of those exist already. Well, there is a Procfile buildpack in Paketo, right?
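The env-file buildpack workaround mentioned above could be sketched as a bin/build script like the one below. This is only a sketch: the layer name env-layer and the APP_PROFILE variable are invented for illustration, and it relies on the Cloud Native Buildpacks convention that files written under a launch layer's env.launch/ directory become runtime environment variables.

```shell
#!/usr/bin/env bash
# Sketch of the bin/build script of a hypothetical "env" buildpack.
set -euo pipefail

# The lifecycle passes the layers directory as the first argument;
# default to a scratch dir so this sketch can run standalone.
layers_dir="${1:-$(mktemp -d)}"

mkdir -p "${layers_dir}/env-layer/env.launch"

# Flagging the layer as a launch layer tells the lifecycle to apply
# its env.launch/ variables to the running container.
cat > "${layers_dir}/env-layer.toml" <<'EOF'
launch = true
EOF

# Each file under env.launch/ becomes an environment variable named
# after the file, with the file's contents as its value.
printf 'production' > "${layers_dir}/env-layer/env.launch/APP_PROFILE"
```

A buildpack along these lines could read user-supplied values (for example from a file in the app directory) instead of hard-coding them, which is the workaround being described.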
C: I guess what I could envision is project.toml, right; it already has a build.env. A run.env, per process, doesn't seem too much of a stretch, but then, to Jesse's point, it would just be a question of the order of operations: which one takes precedence.
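A rough sketch of what that shape could look like in project.toml: the build.env table exists in the project descriptor today, while the run.env table below is purely hypothetical, just the idea being discussed.

```toml
# Existing: build-time environment variables (project descriptor).
[[build.env]]
name = "JAVA_TOOL_OPTIONS"
value = "-Xmx1g"

# Hypothetical: runtime environment variables. This table does not
# exist in the spec; it is only the shape under discussion, and the
# precedence question raised above would need to be settled.
[[run.env]]
name = "SPRING_PROFILES_ACTIVE"
value = "worker"
```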
C: It is the binding contract for everything, right, and I think that's why I could definitely foresee the project.toml becoming part of the core spec over time, just because it does seem to capture that reproducibility concept that you could carry throughout various build tools or build...
B: Implementations, yeah. I think I was...
A: Yeah, I don't know, definitely up for discussion, but I think that for this issue I will at least keep the issue for the copying thing, and then note that the feature would require an RFC. This next one is something Javier said we should sync up on.
A: It's about being able to update the Docker URL, the Docker mirror registry, so that you can use the public registry when you want to, but if you're an enterprise you should be able to do it with your own; in this case they wanted it to be GCR.
C: As I recall... I think here just some context or knowledge transfer is necessary, but other than that, the last time I discussed this at one of the working group meetings there didn't seem to be a reasonable alternative, nothing that is core to the project, and so it's like a pack-specific solution.
A: That makes sense; then not so much discussion is needed on this.
C: I don't think so, but that's my take.
A: It's probably going to be a larger one, though, a place to work through everything. I don't know how much longer... what else is on the agenda? Buildpack CI and Terraform Cloud? How about we go through those, and then, if we have time, we can go through more discussion issues.
C: Sounds good, yeah. I could share my screen if that's okay. So this is more or less an announcement, and honestly gathering feedback at the same time, because I do know that a lot of this started as kind of an experiment and now it has become a little bit more solidified.
C: So in the past we've had, essentially, GitHub runners, and we've been using Equinix Metal, which previously was called Packet; these are all things that are provided through the CNCF, right, and we've been maintaining them manually, to some degree. So one of the things we were playing with is that Anthony from the Windows team had created some Terraform scripts for a Windows LCOW runner, and right now, for a lot of the work I'm doing, I needed an additional runner, for Red Hat Enterprise Linux with OpenShift, so I could test some of the Tekton-related stuff and set up CI that way.
C: So as part of that, I started playing around a little bit more with Terraform, trying to find a more automated or managed solution versus all the manually set-up runners. One of the main problems with Terraform is being able to share the state of the machines, and what I was looking at was Terraform Cloud, which is essentially free for, I think, up to five users. The way that I've set it up right now is with one main user.
C: Everybody that has access to the infrastructure would have access to that account. So far, if you look at how this is set up from a usage perspective, there's a script to authenticate with Terraform that uses LastPass. So if you have access to the infrastructure, you have access to LastPass; you authenticate against Terraform Cloud, you run terraform init and apply, and that is what's going to essentially sync our state across every individual.
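For context, sharing state through Terraform Cloud generally comes down to a remote backend block in the Terraform configuration. A minimal sketch, assuming hypothetical organization and workspace names:

```hcl
# Sketch of a Terraform Cloud remote backend; "example-org" and
# "ci-runners" are made-up names, not the project's actual setup.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"

    workspaces {
      name = "ci-runners"
    }
  }
}
```

With something like that in place, terraform init connects to the shared workspace and terraform apply reads and writes the remote state instead of a local terraform.tfstate file, which is what keeps every contributor in sync.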
C: All the Terraform scripts are pretty much set up for Windows LCOW and Red Hat Enterprise Linux with OpenShift, and one of the things I could foresee in the future is adding more; I think adding WCOW, which we have and which right now is still manually managed. And lastly, what was it... there was going to be one more thing.
C: Oh, I totally lost it. Anyway, yeah, so again, more or less an announcement. I don't know exactly how we feel about it; I'd like some feedback, if there are any ideas or anything like that.
A: The contributors all have access to the LastPass, right? Like, part of this requires a LastPass login that's open to all contributors?
C: It's available to all identified project contributors, right, so yeah.
D: I've never tried. I can be a guinea pig if you want me to sometime, but I have not tried it, nor have I needed it yet.
C: Yeah, so I guess... oh, that was the part I was missing. So right now this is integrated with the pack and tekton-integration repos, right, so it's per repo. I don't know what the lifecycle team or the implementations team is using, but I would think that at some point we might want to converge on using the same methods for any custom runners.
A: I know you said... is there a similar setup to stand up, let's say, GKE, if that's used for the Tekton testing?
C: Yeah, so on this other side we do have a GKE setup for Kubernetes.
A: Okay, that's great; thank you for making the change. This definitely worked, yeah, it definitely worked well.
A: Yeah, I logged in and I was able to get the information, though I had to make some changes in order to get LCOW working. When this ran the Docker side, you have to manually share drives with Docker on Windows in order for the tests to work; otherwise I had to click on individual directories, so it was always failing, because when we make test directories they're always random, so I had to just share a certain parent path with it.
C: Oh yeah, okay, so here's just a statement about Docker: automating Docker Desktop is painful, to say the least, and then being able to switch between Linux containers and Windows containers... yeah, it's just not ideal.
A: Okay, I don't know if we can, for the next five minutes, go over some of the other issues, or if we can just call it now and, you know, keep an eye on them as the week goes forward. What do people want to do?
A: Makes sense. I think, anyway, I need to... I haven't pruned the GitHub issues for quite a while; they could probably use some cleanup, so I'm planning on doing that at some point soon, to see what we've already fixed, etc.