From YouTube: 20140404 kubeadm breakout packaging
A
Hello, today is April 4th, 2019. This is a breakout session with respect to the kubeadm project, but it also dovetails into release artifacts and publishing, which we've loathed — and wanted to escape from — for a long time. But we find ourselves in a situation where we have to manage and maintain some of the packages. So we've had long-standing KEPs that we got merged last cycle: one was for the release process, one was from the packaging side.
B
So actually, there were a couple of things that I wasn't sure I did the way they were supposed to be done. I made it like kubeadm, so it builds when you use `build/run.sh make` as the command to build. That's what's always worked for me. Whether building it the way kubeadm does currently is what we want, I don't know — but that's what I made.
B
FPM actually builds it based off of your filesystem: you basically build your filesystem tree out and then tell it to build that into a package. Or we can do a virtual filesystem, so we can map things to each other. These are just some comments on how we could do that. Currently I don't do that, so you need to copy the file in, set it up in /usr/bin, and run it from there if you want to.
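The staging-directory flow described here can be sketched in a few lines. The binary name, paths, and versions are placeholders, and the fpm invocations are shown as comments since they assume fpm is installed:

```shell
set -euo pipefail

# Stage a filesystem tree the way fpm's "dir" source expects:
# lay the files out exactly where they should land on the target system.
STAGING="$(mktemp -d)"
mkdir -p "$STAGING/usr/bin"
printf '#!/bin/sh\necho kubeadm stub\n' > "$STAGING/usr/bin/kubeadm"
chmod 0755 "$STAGING/usr/bin/kubeadm"

# With fpm installed, one command per format turns that tree into a
# package (the flags are real fpm options; name/version are placeholders):
#   fpm -s dir -t deb -n kubeadm -v 1.14.0 -C "$STAGING" usr
#   fpm -s dir -t rpm -n kubeadm -v 1.14.0 -C "$STAGING" usr
echo "staged tree at $STAGING"
```

The same staged tree feeds both deb and rpm output, which is what makes the filesystem-first approach attractive.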
A
Yeah, I mean, when you install some packages, they install as a user, right? And you can specify ownership rights on given areas, and privileges. Have you investigated any of the higher-level abstractions that you would need? Like, what if I want to install dependencies that are missing? What if I need to set privileges on directories?
B
What you can do is overrides — that was my way of fixing that. Basically, these are the shared settings — this is what we're going to say works for both formats — and the overrides are things that are specific to RPM, or, you know, maybe this needs to depend on something with a different name. Okay.
A
Fine — well, that's what we do today, which is not terrible, but we want to have one way of basically patching the current structure and then outputting the artifacts. I feel like if we just took the release repository and smashed it over into the directory structure, but did an abstraction layer for the parameters that you override — like, you have an input file, and your input file basically does the overrides.
A
A
So
I
think
the
biggest
thing
that
we
need
to
do
from
requirements
perspective
is
to
do
a
variable
substitution
override
in
having
a
configuration
file.
That,
basically,
is
the
single
input
file.
That
then
does
it
for
both,
even
if
they're
just
stuffed
out
templates
for
the
most
part,
that
would
be
sufficient.
So
just.
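A minimal sketch of that single-input-file idea, assuming a hypothetical `pkg.env` config and a stubbed-out spec template — every name and value here is illustrative, not the project's real layout:

```shell
set -euo pipefail

# Hypothetical single input file holding the values shared by the
# spec and debian templates.
cat > pkg.env <<'EOF'
NAME=kubeadm
VERSION=1.14.0
EOF

# A stubbed-out spec template with placeholder variables.
cat > kubeadm.spec.in <<'EOF'
Name: @NAME@
Version: @VERSION@
EOF

# Substitute the variables into the template at build time.
. ./pkg.env
sed -e "s/@NAME@/$NAME/" -e "s/@VERSION@/$VERSION/" \
    kubeadm.spec.in > kubeadm.spec
```

The same `pkg.env` could feed a `debian/` template by the same substitution step, which is the "does it for both" property being asked for.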
A
Possibly a two-step process: either with FPM, or using the existing packages, right? So if you had the existing package with an override substitution, that's what we currently need. One thing I'm trying to figure out is the cost-benefit analysis of doing FPM this way — whether or not we're better off just using the packages, making canonical specs and canonical deb-based installs.
A
I have never seen what we're doing before, so that's something we need — that's fine, we can do that. I think the question I'm having is the maintenance burden of doing FPM in this manner, versus just having, like, a spec.in, or doing some variable substitution where we define the variables and substitute them into either the spec or the directory structure for Debian at build time. That way a person can customize easily with a Go file, or a shell script, or whatever you want to do with it.
A
This already actually exists, but it's not what we use for publishing, and what exists currently is a little bit messed up. It's not very clean; it's got some Bazel-isms in there. What I'd like to do is take the Bazel-isms out, or sort of abstract the Bazel-isms into a single thing that could be the overrides, because I don't want to rely too heavily on Bazel. I would suggest, like, the directory structure, and being able to say "make a package".
D
Right now I'd be more in favor of option A, possibly just because we can. You know, originally FPM was not the first candidate — the first candidate was actually nfpm, which is written in Go, and it's pure Go. The features missing were just a few, and ideally, if we had to adopt something at some point, nfpm would be it.
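For reference, a minimal nfpm configuration along the lines discussed might look like the following. The fields follow nfpm's published config schema, but the name, version, paths, and dependencies are placeholders, not the project's real values:

```shell
set -euo pipefail

# Write a minimal nfpm.yaml. Common fields are shared, and the
# per-format `overrides` section carries deb- and rpm-specific
# dependencies -- the base-plus-overrides shape discussed earlier.
cat > nfpm.yaml <<'EOF'
name: kubeadm
arch: amd64
version: 1.14.0
maintainer: Release Managers <hypothetical@example.com>
description: Kubernetes cluster bootstrap tool (placeholder text)
contents:
  - src: ./_output/kubeadm
    dst: /usr/bin/kubeadm
overrides:
  deb:
    depends:
      - kubernetes-cni (= 0.7.5)
  rpm:
    depends:
      - kubernetes-cni = 0.7.5
EOF

# With nfpm installed, each format is one command:
#   nfpm package --packager deb --config nfpm.yaml
#   nfpm package --packager rpm --config nfpm.yaml
```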
A
Because, like, if we're going to create a canonical spec, we want to own and maintain it, as well as a canonical deb file — because the problem with generic generators is that they never take into account the distributions as they change over time. There's always some lagging inefficiency there.
C
I can add that I have seen projects doing what Kubernetes does, and I also have not been a fan of it, because I find it extremely fragile from the maintenance perspective. Unless you have a very strong owner who's constantly curating, it just falls apart — you end up exactly where ours are. I also agree that having things overly generated becomes a difficulty point for maintenance, versus a flat spec file, especially if you look at what our spec files do, or are intending to do, today.
C
It's almost nothing. It's very simple — there's a straight, flat, logical flow. If you look at what would be the generated spec file, almost anybody who's seen a spec file could walk in, look at it, and say: oh, I get it, I know what's going on. That means you have a better chance of maintainership — a broader set of potential contributors — whereas that shrinks the more it gets abstracted away.
C
The
articulating
in
an
abstracted
way
your
dependencies
across
distributions
and
operating
systems,
the
you
you
end
up
with
very
complex
files
right
now.
We
only
have
a
couple
things
that
we
express
and
ours
are
already
a
mess
and
over
time,
if
the
project
drifts
towards
something
more
like
a
distribution,
these
packages
are
gonna,
have
a
distribution
level
feel
to
them
and
they'll
be
potentially
quite
complicated
and
what
they
try
to
articulate
so
having
something
closer
to
a
straight
description
in
the
native
packaging
form
for
the
debian
or
RPM
based
distributions.
C
I
feel
like
that's
much
more
likely
to
be
maintainable
over
the
long
run,
a
question.
Maybe
that
came
up
as
you
started,
writing
Tim
just
the
first
set.
What
are
the
things
specifically
that
we
imagine
would
be
overridden?
Is
it
the
beiong
going
from
the
abstracted
IP
tables
to
the
district,
specific
IP
tables,
name
plus
virgin,
or
what
types
of
things
would
be
over
in?
There
am
I
missing
something
maybe
beyond
just
dependencies.
You
mean.
B
One thing I was going to say — one thing I know for SUSE is that we like to package our own stuff, with our own changelogs, if we make some contributions to it. So we provide the changelog with the stuff that we added to it, and our dependencies aren't always the same as other people's that use RPM.
D
I'm thinking — a long time ago I had a similar experience with Docker, creating packages, RPMs and so on, for Photon. In order to create official Photon packages, I had to go through various hoops to make sure that it was supported, because we had some interesting challenges, like different package names and so on, even though we were using their RPMs. I took it through several hoops, and the build process at the time was not very straightforward and streamlined.
D
Separating build rules per distribution was a good idea at the time. You could add support for new distributions by simply adding, you know, a new spec or a new override — call it whatever you like — to support any new distribution that would come along in the future, with different requirements like changed package names and so on. So, I don't know if this makes sense, but it would be something we should do at the same time as standardizing on a set of distributions.
F
Okay, can I give an example? Some time ago I participated in a project that had a fairly big contributor base, but at some point we got so frustrated with the packaging that we stopped doing it ourselves. And what happened is that the distros started approaching us with people and resources to maintain the packages for us. So at the point when we stopped, the community basically triggered volunteers to do the work.
F
Also, in the particular project I'm talking about, one distro at some point became hostile and started attacking the developers on mailing lists, because we were not following the correct process of creating specs and stuff, and we entered some flame wars. At that point I realized that the whole packaging thing is broken on a very fundamental level. But yeah, if we have to maintain it, we have to. The question is how to do it in the most maintainable way.
B
Mean
I'm
kind
of:
can
we
maintain
it
in
such
a
way
that
we
maintain
it
for
the
distribution?
So
we
make
it
really
easy
for
distributions,
but
don't
actually
maintain
official
ones.
I.
Think
honestly,
I,
like
timothy's
in
game,
where
we
have
a
image
installer
container,
that
handles
with
all
those
gross
nets
for
people,
but.
A
I think if we start to step this out in the main repository, have the canonical versions of the packaging there, and do public broadcasts to different distributions — like, I know a Debian maintainer, and I am still a packager for Fedora. Whether or not I admit it, I can write canonical specs, right? I don't particularly like doing it, but I can do it.
A
So we can get a canonical spec in place that matches what I PR'd two years ago, along with getting a Debian structure that's more like canonical packages, then have the overrides available for distributions and see if that's good enough. On the branching strategy — since every version of Kubernetes has its own branch, we'd have to merge some of this stuff backwards to previous branches.
B
Well, I did just want to go back to something that Tim Pepper said. I do think that we should keep it simple for maintainers. I'm very new to the project and new to Go, and while I've been learning, the simpler it is, the easier — because when it's not custom, you can find stuff out on the Internet. So I do think that's a really good point. Sorry, I know I just took this backwards in time like that.
C
The new thing has to be as good as or better than the prior thing. One option would be, over the 1.15 cycle — presuming that the workgroup and the infra folks are moving forward on that front — that we focus on standing up a flow that populates the new infrastructure, and we demonstrate that it, done in a clean way, is better than the old way. Because right now, doing a comparison with the old way is fraught.
C
It could be sooner than that. If we show that the new non-Google workflow and hosting is good, we could get to a point, sooner, where we just turn the old stuff off — which would imply having backported this first, then demonstrated it. But that would be the proof point for people saying: okay, this is okay for us to backport to the other branches.
C
Yeah, I think this is mostly going to be isolated to the build directory. And in terms of what the current build flow is — what I call the existing Google workflow for build publication doesn't use the Kubernetes build directory at all, so there aren't going to be existing patches on that in prior branches.
C
And that's where I feel like it would be a whole lot easier to just say we're doing it in the new flow: set up a preferred layout, with something for testing repos as well, in the new infra. Because the existing stuff is on the backs of individuals, and unless I change jobs to work for Google, I'm never actually going to see the scripting that they use to do the actual build-and-publish step.
A
That part — the publishing practice — is the trickier bit: making sure that the release team is not in some weird purgatory for 1.15. I would hate for us to have 1.15 artifacts generated one way and all the previous artifacts generated in a different way, and then try to maintain that for all the CVEs that come out over the next six to nine months.
C
Worst case, I see that as a forcing function to say: hey, look at this — we've got community support and it's easier than the current stuff; let's finally make the old stuff work that way. If there was resistance — and I think, because the current process is unknown, or, sorry, known to be defective at this point, and on the backs of a few individuals at Google — there are going to be people who are receptive to making a change, if we can demonstrate that what's there is okay.
C
I've been asking for about a year, and the closest I've gotten is hearing it referred to as the "rapture" process — and I'm guessing even that word is somehow confidential and isn't supposed to exist outside there. I've also asked around in circles, and the only way I get information is when somebody accidentally mentions something, or sends out a link to a file like: oh, it's all documented here — and, oh wait, no, you can't actually view that, because you're not a Google employee. And I've asked: could this be —
C
Yeah, it's getting close. We actually talked about packaging and repos yesterday in the meeting, so it's seeming like it's close. But still, most of the talk in the meetings is around things like making sure the DNS is working right. They're there, but there are also 20 different things that they're working on getting lined up to make equivalent hosting for all the things that are hosted today. So I've felt, since the new year, that it's within weeks or a month of starting to shift into being on.
A
Okay, I need to sync with them to figure out where they're at. If there's — you know, if we reach your Kobayashi Maru, we should plan for that endgame, because I don't want to be reliant on people that don't exist, right? If we're going to make these changes, I want to make sure — much like Kirk, I'm going to cheat the process. I don't believe in the no-win scenario.
C
Yeah, I'd love to hear what people think about that. Maybe this is what it takes to light a fire under this, I'm afraid — because, I mean, it's been... The other thing: Google made their nine-million-dollar announcement in August. It's been nine months, and we still don't have infra, and it's moving very slowly.
C
I would take nine million dollars and give the same output that we have — that's a little harsh, but I mean, we don't have something working today, after all this time. I know people have been working on it, but at some point, like you say, we've got to have the ending; we've got to actually have it running.
C
Us as a community, together — like, if we have requirements and we want to make a change, we can articulate them, agree on them, and effect that change: whether it's a package template, or the thing that generates that, which then generates the packages and publishes the packages, multiple streams, whatever. We would have the ability to do all of that without being dependent on a Googler to make some change somewhere in a system that we don't see or understand.
F
The
the
released
in
engineering
project
I'm
not
sure
how
this
is
going
currently,
but
I
see
a
problem
in
the
overall
release
process
needed
in
the
maintenance
of
what
our
releases.
The
problem
is
that
we
have
a
temporary
released
him
and
the
seek
release
leadership.
But
then
the
release
team
is
disbanded
and
we
don't
have
any
engineers
persistently
fixing
release
problems.
F
So we still have the problem of users facing breakages. I wanted us, at some point, to decide — I don't know if this meeting is the forum to decide — how to fix the problem immediately, like "patch" it, quote-unquote, in some way, and maybe think of long-term solutions later. Because otherwise people are continuing to file issues, and we need a solution.
A
The concern, yeah: they make a change, and no one verifies anything across all of the versions, because the problem with the change is that it would affect everything, and there would be a patch released for all versions, versus forcing people to go update. I would prefer just a statement of record: oops, we made a mistake, we're trying to fix it actively; in the interim, please upgrade to the latest patch version for your release.
C
I sent a message yesterday to SIG Release and SIG Cluster Lifecycle soliciting opinions on the options here, because I don't think it's right for me to declare it — I think my preferred path probably differs from what Tim St. Clair would do in this position, or Lubomir as well. So I don't think it's right for me, as a chair of SIG Release, to say: this is what I'm doing to fix the problem.
A
Well,
if
we
had
an
errata
like
that,
we
canonically
point
people
to
we
just
feel
like
oops,
we
made
a
mistake.
It's
gonna
be
fixed
in
releases.
So
if
you
have
no
reversion
the
new
and
workaround
just
to
do
XYZ,
that's
a
typical
standard
practice
for
release
engineering
right
like
we
did
it
all
the
time
past
I
guess.
C
That's the big choice: to make that declaration. I want that declaration, but I don't know that it's my call to make it unilaterally for the project — especially because users have the perception today that this has worked, and that this is changing something; that we're regressing against their expectations, or their belief in support.
C
The package today — it was hard-bound against the old CNI; it sees the new CNI package and says: oh, I don't know what to do. Even though it's hard-bound, everything should resolve that. All I can say is that, for both the debs and the RPMs, it's the dependency resolver on the client side that installed it.
F
So I think that was explained — also by Jason, I think. The reason for this: we are basically saying equals 0.6.0, but the debs are not respecting this rule. Basically, what Andy and Jesus said is that, probably because the major version of the CNI package is zero, there's some logic inside the package manager that is not respecting the equals sign, and it's trying to get 0.7.5 instead.
C
Equals
does
not
work.
Yes,
our
old
packages
say
they
want
old
dependencies.
Those
are
all
present
in
the
repository,
but
newer
packages
also
happen
to
be
present.
On
the
surface.
That
shouldn't
be
an
issue.
The
old
packages
should
pull
in
their
old
package
dependencies
they're
all
present
there
self-consistent
cool,
but
somehow
it
notices
the
in
the
repository
there's,
a
newer
version
of
the
dependency
happens
to
have
a
leading
zero
on
the
version
number
I
I
haven't
run
into
this
before
and
looking
all
across.
C
Other
distros
I
see
that
that
case
exists
and
I'm
sure
it
has
in
lots
of
cases
forever,
but
for
some
reason
that
breaks
things,
the
only
difference
I
see
is
when
you
act
like
a
distribution
instead
of
just
a
project
populating
a
directory
with
all
of
your
releases
is
a
distribution
your
repository
contains.
The
latest
sets
one
of
each
package.
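One possible client-side workaround — sketched here as an assumption, not anything decided in the meeting: on apt-based systems, a preferences pin can force the resolver to keep the old dependency instead of jumping to the newer one in the same repo. The version strings are illustrative:

```shell
set -euo pipefail

# Sketch of an apt preferences pin holding kubernetes-cni at the old
# version even though 0.7.5 is present in the same repository.
# A real pin would live in /etc/apt/preferences.d/ and needs root.
cat > kubernetes-cni.pref <<'EOF'
Package: kubernetes-cni
Pin: version 0.6.0-00
Pin-Priority: 1001
EOF

# Alternatively, request the exact versions explicitly at install time
# (version strings are placeholders):
#   apt-get install kubeadm=1.13.5-00 kubernetes-cni=0.6.0-00
```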
A
One
of
the
things
that
we
should
strive
to
do
too
is
when
I
was
looking
at
the
repository.
It
was
not
curated,
but
we
didn't
remove
things
that
were
on
a
date.
We
had
CNI,
which
was
super
old.
We
had
a
whole
bunch
of
stuff
that
was
super
old,
insider
customer
and
I
think
we
should
probably
remove
that
stuff.
If
you
even
look
at
my
standard
docker
repositories,
they
do
not
have
things
in
there
that
are
no
longer
supported
by
their
support
matrix.
So
we
should
remove
those
things.
Yep.
C
That's the difference. I'm trying to think of an example of a project that puts its stuff in a repository like this and keeps all of it there. When you put up a repository, you're acting like a distribution, and what that repository should contain is the current set of what's preferred by the distribution — that curation action needs to happen. And, based on the behavior here, the package manager supposes that that is the case.
C
Yeah, so that presumes that the versioning is at the level the distro is versioning. To do something similar in our case — there's still only one: there would be one vim for Fedora 29, one for Fedora 28. We would need repos that are kubernetes-1.14, kubernetes-1.13, and we'd have one instance of kubelet, kubeadm, and kubectl in each of those repositories.
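That layout can be sketched as plain directories — the paths, versions, and filenames here are illustrative, and a real repository would also need the apt/yum metadata on top:

```shell
set -euo pipefail

# One repo per Kubernetes minor version, each holding exactly one
# (latest) build of kubelet, kubeadm, and kubectl -- the way a distro
# release keeps one vim. Empty files stand in for real packages.
for minor in 1.13 1.14; do
  mkdir -p "repos/kubernetes-$minor"
  for pkg in kubelet kubeadm kubectl; do
    : > "repos/kubernetes-$minor/${pkg}_${minor}.0_amd64.deb"
  done
done

ls repos/kubernetes-1.14
```

With this shape, the resolver inside a given repo never sees two versions of the same dependency, which sidesteps the "equals does not work" behavior discussed above.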
A
So I think the answer of B is what we should be aiming for, and we can bikeshed on the semantics for the release repo. But I think that should be owned by the kubeadm group and the release team — to manage that stuff and come up with concrete use cases. The packaging stuff, we can —
A
I don't think this is — we can — let's just start to work on the action items and then update the KEP as appropriate, because it's still provisional, and, you know, things change. So let's get a POC in place and modify the existing versions. We can start with the RPMs, because they're not actually used in CI, so I can get all the pieces in place and then switch to do a second.