From YouTube: Platform Sync: 2020-08-19
Description
* KubeCon updates
* Status Updates
* Release Planning
* pack release candidates
* Docs milestones for pack releases
* Pack plugin RFC
A: Alrighty, I see I put a couple of things in the agenda. We can start running through those; feel free to add more things to the agenda if there's something you think we should discuss. The first thing is KubeCon updates.
A: Yeah, the booth's been quiet. One of the things I did see was Slack, especially when Ben and Terence gave their presentation; there was a lot of activity in the Slack associated with that, so that was pretty cool to see.
A: Yeah, but it wasn't just their talk. I attended a couple of talks, and the overall video quality was subpar, which made any terminal demos very hard to comprehend.
C: This was the case for the Cloud Foundry Summit recently as well. Any screens that went up were just not happening.
A: Cool. I did post something; I don't know exactly if it's related to platform, honestly. I think there was a question I brought up in governance about the buildpacks registry and the potential of another sub-team, a distribution sub-team, that would take ownership of that. But there was also stargz, which is basically tar gzip with a seekable tar format, which was really cool. It's like lazy loading for images, and I think that would be valuable from a performance aspect.
A: I think it would be awesome to at least investigate whether or not the value would be there, especially for cases where we only use a small bit of, you know, runtime dependencies for the build process. We don't really care to download all the buildpacks in a builder; if we could just download the detect script, or the detect binary, then we don't have to download everything else. Anyway, that's something I'll probably investigate in the future.
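The lazy-loading idea can be sketched in a few lines. This is a hypothetical toy illustration of the seekable-tar concept behind stargz, not the actual eStargz format: each entry is gzip-compressed independently and indexed in a table of contents, so a single file (say, a buildpack's detect binary) can be read from its byte range alone, the way an HTTP range request against a registry blob would.

```go
// Toy sketch of a "seekable tar" (hypothetical, not the real eStargz spec):
// each file becomes its own gzip member and a TOC records its byte range,
// so one entry can be decompressed without downloading the rest.
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

type tocEntry struct {
	Name   string
	Offset int64 // start of this entry's gzip stream in the blob
	Size   int64 // compressed length of this entry
}

// build writes each file as an independent gzip member and records its range.
func build(files map[string][]byte) ([]byte, []tocEntry) {
	var blob bytes.Buffer
	var toc []tocEntry
	for name, data := range files {
		start := int64(blob.Len())
		zw := gzip.NewWriter(&blob)
		zw.Write(data) // bytes.Buffer writes never fail
		zw.Close()
		toc = append(toc, tocEntry{name, start, int64(blob.Len()) - start})
	}
	return blob.Bytes(), toc
}

// extract reads a single entry using only its byte range, the way a
// range request against a registry would fetch just that slice.
func extract(blob []byte, toc []tocEntry, name string) ([]byte, error) {
	for _, e := range toc {
		if e.Name != name {
			continue
		}
		zr, err := gzip.NewReader(bytes.NewReader(blob[e.Offset : e.Offset+e.Size]))
		if err != nil {
			return nil, err
		}
		return io.ReadAll(zr)
	}
	return nil, fmt.Errorf("no entry %q", name)
}

func main() {
	blob, toc := build(map[string][]byte{
		"bin/detect": []byte("#!/bin/sh\necho detected\n"),
		"bin/build":  []byte("#!/bin/sh\necho built\n"),
	})
	data, err := extract(blob, toc, "bin/detect")
	if err != nil {
		panic(err)
	}
	fmt.Printf("read bin/detect (%d bytes) without unpacking the whole blob\n", len(data))
}
```

The real format also handles tar headers, chunking of large files, and TOC placement, but the core trade is the same: a slightly larger blob in exchange for random access to individual entries.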
A: I can give mine on behalf of the VMware side. For the pack release: we did have a pack release go out just yesterday, and we haven't seen any issues reported yet, so we're crossing our fingers.
A: We're updating a lot of documentation in regard to that new release, making sure that some of the callouts and changes are properly documented.
B: On our side, I think Joe's pushing forward on the stack pack stuff; that's most of the activity right now. Travis, chime in if you have anything to add, but that's kind of where his focus is.
B: I was gonna say, nothing to add, just the buildpack registry stuff; we're making good progress with that as well. I might not be fully up to date, though.
A: The one with the PR that you're currently reviewing, you and David?

B: Yes. I wouldn't point anybody to it yet until we've tested it a few times integrating it with the new repo, because prior to this I've been testing it on my own repo. My thought was: once it's reviewed, then as far as we know, everything looks good.
B: I think Joe had some ideas or thoughts on blog posts, or, you know, Twitter, all the social media, to get the word out once we're ready to start having people use it. I know internally here at Heroku we have, I believe, the Java buildpack and the Node buildpack authors on our end, and we're going to try to have them become early adopters, so we should have some pretty quick feedback on that as well.
A: All right, moving on to release planning. I think I kind of spoke to this already: the release went out, so we're good there, and we don't have anything slated for six weeks on the pack side. One notable mention, maybe, is the Windows-related work; we're not really promoting that just quite yet. As Emily mentioned, there are still some issues in the lifecycle that need to be addressed before we really want to, you know, I guess, put a lot of fanfare behind it. For now it's still experimental; people can play with it. There's a known issue mentioned in the release notes, and then hopefully in the next six weeks we'll have everything else addressed and be able to really put some fanfare behind it.
C: I think six weeks might be an optimistically long period of time. I understand that we're intending to have at least a little fanfare around SpringOne, even if it may not be a talk, and that will at least mean some more eyes on it, which I think is just a couple of weeks away.
C: That is a good question. I get this information secondhand from folks working on the Windows side, but it is just a couple of weeks out; that's September 2nd and 3rd. Okay.
A: Okay, moving on to pack releases and release candidates. Bouncing ideas around: I think one of the big pain points with the lifecycle and the latest release of pack was that we essentially needed a patch of the lifecycle, and we didn't really have any way to test the lifecycle while pack was still being, you know, developed.
A: I know some of the other contributors to the lifecycle have been talking about potentially adding that there as well: having a week where they provide an RC that pack could then start integrating with and testing. That way we could maybe alleviate some of the unnecessary patches afterwards.
A
So
keep
an
ear
for
that.
If
there's
no
pushback
on
that,
I
think
pack
could
start
doing
that
immediately,
essentially
on
our
next
release.
A: Cool, all right, next thing; I feel like I'm hogging the floor over here, but: docs milestones. In our last sync we talked about moving from the backlog over to using milestones for pack. I think the docs have needed a little bit more, you know, love, I guess, and in providing new pack releases we also typically want to update the docs associated with them.
A: So one of the things I've started personally doing is creating milestones that are associated with pack releases, so that way we can, you know, aggregate all the changes necessary for any given pack release. I've also started to use milestones for things like KubeCon, so any feedback that comes out of KubeCon, we can aggregate those issues and put a little bit more focus on them as well. This is all in lieu of the backlog.
A: Yeah, I think David mentioned something similar from his perspective. I don't know if this is exactly what you're speaking to, but he still wanted a single view of all the different ongoing milestones, so that you could go to one area and then just click through to all the different sections, or areas, you want to look at. Is that similar to what you're thinking of?
B: To be honest, I have a bit of a disconnect with how the docs and all that are managed, but I was just wondering: are the docs manually written, or are they auto-generated? And if so, is there maybe an opportunity to auto-generate new documentation around milestone details, like through GitHub Actions or anything like that?
A: I see. We do have certain things that are automated. Like, recently we added godox; or, sorry, I shouldn't say that, we're adding godox, but that's slightly different.
A: We've enabled a function of Cobra that generates markdown files for all the pack commands, so that documentation is auto-generated and then published onto our website. So there is that, at least. There are other parts that are a little bit more manual, and these are things like tutorials: having to update tutorials as buildpack or platform API changes occur. I don't know that there's a real way to automate that, but I'm sure we can improve it long term.
C: Yeah, I felt bad while you were talking, because those were all your queries, all your points. So, the concept of plugins in our binaries has come up a couple of times, most recently in Joe's RFC around experimental features. I think previously Zander raised a similar RFC about the experimental flag in pack, which has been implemented, and plugins have been sort of promoted as an alternative, or also just a complementary system. It wouldn't even need to be just around experimental features.
C: It could just be tangential stuff that we actually don't want in the core CNB product but that certain use cases fit pretty well. So, yeah, I've personally been doing a bunch of stuff around refactoring our acceptance tests, to try and make them a little more inviting to folks, and on the horizon, as I come toward the end of that period, I'm thinking: is this my next big thing? I guess I'm trying to get a feel for whether somebody else has that feeling, or has already started investigating this. If there's room for collaboration, I'd be happy to do that. If not, then I guess the other question is that this is not something that has featured in any of our roadmaps. So it's a little bit out there, potentially, and not being discussed at a broader level; the RFC would be an opportunity to do that, but maybe there's a pre-RFC phase of: do we think this is cool but we're not going to do it, or do we think, actually, this is worth pursuing?
A: So right now we interface with the Docker daemon and with registries, and that's ultimately the only way. But some of the requests that have come in have to do with, what is it, Podman being one of them, right? That would eliminate the Docker daemon, and we can't really use Podman's docker-socket support because their docker-socket support is broken. So being able to create a plugin in which we actually interface directly with a backend, a container backend, would be very beneficial for Mac.
A: I've explored the idea of doing kind of an alternative to Docker for Mac where, instead, we would have our own containerd and LinuxKit running; we would start it up ourselves and manage a lot of it internally, so that it would be a lot smaller and more performant in certain fashions. That's where something like, you know, stargz could come in, where the container snapshotter and all that stuff would run a lot faster. And there's one product, VMware Fusion.
A: I think it's still in alpha or beta, or something like that, but they have vctl, which allows you to do Docker images and Kubernetes management locally, and they also do not have docker-socket support, right? So being able to enable that backend would also be beneficial.
C: So, broadly, a kind of container-management or repo-management infrastructure would be one of the things I could see room for, and that could potentially be additional commands as well. Like, I think we don't have an inspect-image right now, so if people wanted to do inspections in different ways, or something like that, different options, potentially pulling different blocks of data into our inspections, could be entry points as well.
C: I guess in a similar way: sorry to exclude a few folks, but we had an internal demonstration of a modified version of dive that allowed us to extract specific CNB image information. While that isn't necessarily a particularly good use case for it, wouldn't it be nice if dive had this plugin interface, so that we could just publish a plugin and say: we know more about these images, so we're going to offer these options instead of the default stuff you would list? So I can see room for that, possibly.
A: Yeah, I think the way that I would picture this, though, is maybe creating RFCs not so much for a holistic plugin system, but for very small subsections of plugin functionality that we want to provide, and then I think eventually we would maybe circle back around to a central, unified system. But I would hate to provide so abstract an idea that it never becomes something we could actually implement in a reasonable time frame.