From YouTube: 20190503 node lifecycle workstream
A
Hopefully folks didn't see my screen as they're filtering in here. So my objective for today was to talk about the beginnings of the two proposals. The first one was the image stamper, minter, or builder, whatever you want to call it, and the second one was to talk about node lifecycle hooks. Node lifecycle hooks, to me, is very simple: it's basically defining a state machine, and we can probably do that completely asynchronously on the doc.
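The state-machine idea mentioned above could be sketched roughly as follows. This is a minimal illustration only; the state names and legal transitions here are invented for the example, not anything specified in the proposal:

```go
package main

import "fmt"

// NodeState is an illustrative lifecycle state for a node.
type NodeState string

const (
	Pending      NodeState = "Pending"
	Provisioning NodeState = "Provisioning"
	Running      NodeState = "Running"
	Deleting     NodeState = "Deleting"
	Deleted      NodeState = "Deleted"
)

// transitions encodes which state changes are legal.
var transitions = map[NodeState][]NodeState{
	Pending:      {Provisioning},
	Provisioning: {Running, Deleting},
	Running:      {Deleting},
	Deleting:     {Deleted},
}

// CanTransition reports whether moving from one state to the next is allowed.
func CanTransition(from, next NodeState) bool {
	for _, s := range transitions[from] {
		if s == next {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(CanTransition(Pending, Provisioning)) // true
	fmt.Println(CanTransition(Running, Pending))      // false
}
```

Hooks would then hang off specific transitions (for example, "before Deleting") rather than off individual provider calls.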
A
What does immutable infrastructure mean for a provider interface? Well, what does an update become? I imagine that, from an immutable-infrastructure perspective, it would be an asynchronous action: you declare a state for an update and it would be a delete and a create, but there could be a bunch of other behind-the-scenes actions going on. For example, you could pre-stage things before you actually did the action.
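The "an update is a delete plus a create" idea can be sketched like this. The `Machine` type and `Provider` interface below are invented for illustration and are not the actual Cluster API types:

```go
package main

import "fmt"

// Machine is a stand-in for an infrastructure machine.
type Machine struct {
	Name    string
	Version string
}

// Provider is a hypothetical infrastructure-provider interface.
type Provider interface {
	Create(m Machine) error
	Delete(name string) error
}

// fakeProvider records the calls it receives, for demonstration.
type fakeProvider struct{ Calls []string }

func (p *fakeProvider) Create(m Machine) error {
	p.Calls = append(p.Calls, "create:"+m.Name+"@"+m.Version)
	return nil
}

func (p *fakeProvider) Delete(name string) error {
	p.Calls = append(p.Calls, "delete:"+name)
	return nil
}

// ImmutableUpdate realizes an update by deleting the old machine
// and creating a replacement at the desired version.
func ImmutableUpdate(p Provider, old Machine, newVersion string) error {
	if err := p.Delete(old.Name); err != nil {
		return err
	}
	return p.Create(Machine{Name: old.Name, Version: newVersion})
}

func main() {
	p := &fakeProvider{}
	ImmutableUpdate(p, Machine{Name: "node-1", Version: "v1.13.0"}, "v1.14.1")
	fmt.Println(p.Calls)
}
```

Pre-staging would slot in before the `Delete` call in a real implementation.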
A
Another point came from a separate conversation I had with some other folks: just because Cluster API is immutable doesn't mean that you can't have something like a kubeadm operator, which other people could use or design, that can do in-place-style updates.
B
One question is: what does saying that it's immutable actually mean? What makes the providers implement it in an immutable way, and how can users tell whether it is or isn't immutable? And will it be slower for users? Because deleting and creating new VMs is much slower than just updating a VM in place.
A
I agree. I talked with Rupa about this particular one, and yes, you are correct: there are no strong guarantees across providers. We could add a set of conformance tests that verifies this behavior over time, and I think that's probably a reasonable expectation for a conformance test, but there aren't strong guarantees. I think as we start to build some of the macro-level behavior into Cluster API, we can get better adherence to it.
A
Currently the behavior is baked into the providers, and we don't have a control flow for common behavior in the core. But I think as we push more of that into the core, it basically calls out via hooks to the providers, making the flow much more systematic, so that an individual delete is just a delete. An update is composed of parts: a delete is one part, and then there's the create.
A
Now, providers could always override if they wanted to. But as I mentioned before, if a person wants to do things in a mutable fashion, there's nothing to say that they couldn't do that with something like an operator pattern on the existing cluster, using a kubeadm-style update, very analogous to how some CoreOS updaters used to work, or do work today.
A
Next up, before we get into the proposal stuff: which workgroup is working on software provisioning across all the CAPI providers? That's what we're doing; that's part of this. The software provisioning, or the stamping or the minting, is the tool that basically does the installation. It could be two parts, right?
C
Yeah, I think what I'm asking about is the provisioning. Maybe I misunderstood the two parts, but the first part is everything that a kubelet or control plane needs, right? And then the second part is the rest of it. For that second part, and I feel like maybe we haven't gotten there yet, we haven't talked about it, it seems like we're going to...
C
...have to come up with some data model, and I don't know if we need to talk to the data model workgroup, or how that's going to work. By data model I mean something like, for example, the kubeadm types, which capture all the different configuration, the knobs and widgets and so forth, that you might need to pass to kubeadm, or really, maybe a lot of different things.
C
It's fine to have extension points, but our goal is to have a common way. For argument's sake, let's say we decide that the batteries-included way of deploying will be to use kubeadm, and that you'll be able to use that with any CAPI provider. If that's the case, then I think, you know, with the requirement...
C
...one of the requirements there is to introduce some data model that is common to all providers, to pass in the configuration. I suppose one way of doing that would be to say: here, this is the common way, and it's still batteries-included, but with an extension. Is that sort of what we're...
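The kind of common, kubeadm-inspired data model being discussed might look something like the sketch below. Every field name here is invented for illustration; the real kubeadm types are considerably larger:

```go
package main

import "fmt"

// NodeConfiguration is an invented, kubeadm-inspired example of a
// provider-agnostic data model for provisioning a node.
type NodeConfiguration struct {
	KubernetesVersion string
	ControlPlane      bool
	// KubeletExtraArgs carries the "knobs and widgets" a user might
	// need to pass through to the kubelet.
	KubeletExtraArgs map[string]string
}

// Validate does a minimal sanity check on the configuration.
func (c NodeConfiguration) Validate() error {
	if c.KubernetesVersion == "" {
		return fmt.Errorf("KubernetesVersion is required")
	}
	return nil
}

func main() {
	cfg := NodeConfiguration{
		KubernetesVersion: "v1.14.1",
		ControlPlane:      true,
		KubeletExtraArgs:  map[string]string{"node-labels": "role=control-plane"},
	}
	fmt.Println(cfg.Validate() == nil) // true
}
```

A common type like this is what would let a single provisioning tool be fed the same intent regardless of provider.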
D
I see. Yeah, I get your point, which is: if we could divine some commonalities between even these extension points, then it might be useful to have a declarative thing. But I don't think we know what that looks like yet, so we might as well just make them RPCs for alpha 2. That's my take.
A
That's cut out of our scope, to be honest. We separated that out: there's the runtime config versus the build-time config. There's runtime behavior for node lifecycle, and there's build-time behavior, and the post-boot runtime behavior is part of the data model. I know that Andy and Vince and Jason and folks have already been thinking about this in detail.
C
I mean, to me, the example is basically most of the things that are in the kubeadm types. But if that's out of scope for what we're doing, that's fine, as long as it's picked up by some other workstream, because we have a goal of letting a user have a single upstream way of provisioning software that can be used with any provider.
A
Because I think there'd be too much overlap, right? You're going to have to plumb it through, potentially in multiple locations, which would affect the models. So I'm going to defer. I think there are requirements that we can put on them which are totally reasonable, and they're already thinking about the use case that you mentioned, Daniel. So, okay.
A
But with regards to the actual implementation, it's a matter of how the data gets there and what data it is, provided that we have the extension mechanism, so I'm okay with it. I think in reality all these proposals need to come back together. There's going to be a point in time where, once we create these proposals, we need to get all of them in place and then rationalize them across all of the different workstreams.
A
So I kind of wanted to go into the two proposals that we outlined last time. We started creating a document. I am terrible at naming, so I will gladly admit it. I don't know if Danny's on the call; if you would like to call it something else, speak now or forever hold your peace. Would you prefer "builder"?
A
So I'll just read it out loud, and feel free to interrupt with questions, comments, complaints, concerns: "It has become commonplace for modern deployments to use cloud-based infrastructure and to create repeatable, image-based deployments, to create immutable infrastructure." All right, that sentence needs to be reworked.
A
There
exist
several
examples,
this
already
in
the
creation
system.
There's
there's
a
number
of
utilities
that
do
the
similar
things
wardroom
clustered
aviary
is
the
vSphere
provider,
the
Amazon
UK
sa
VMI
Telus
Linux
get
excited
rep.
The
purpose
of
the
proposals
are
still
encounter.
Requirements
provided
consistent
standard
image,
image,
building,
building
utility
that
can
be
leveraged
by
hire
to
system
such
as
cluster
API
across
providers.
G
Yes, I saw that this was referencing the reproducible-builds org, which has much more detail around it. One of the things I at least want to get clarity on is: do we expect different teams, trying to run these builder steps potentially at different times, to have the same output? That's what "deterministic" implies.
A
Across providers, I think "reproducible" really should mean that the output should be reproducible. I don't think you can guarantee deterministic behavior, especially across output artifacts. The end result, the overall image, should be reproducible, so I didn't want to put a stronger clause in there.
E
I think so. I think the bits that we're specifically laying down as part of the image-building process, anything related to Kubernetes or the dependencies that we're explicitly installing, should be the same across runs, but things further down the dependency chain, or the base OS bits, should not necessarily be.
A
I don't really like the guarantee being just "repeatable." For a given version it's more than repeatable: there's a guarantee that we provide for the software that we are installing, that the install should be guaranteed to be the versions or the specific artifacts that we specify.
A
That guarantee is based upon the version of Kubernetes and the version of the utility that's used to install Kubernetes at that time, which can vary. If you use kubeadm, you get a guaranteed window of time based upon the kubeadm config, but if you're using some other utility to install, there's no guarantee around the versioning semantics.
F
With immutable infrastructure, the goal is the user having an intent and having it consistently applied across different cloud providers or infrastructure-as-a-service providers. That's the same goal that I think some of us have for image stamping: it should be equivalent to running one configuration, insofar as the user has an intent and we want to ensure that there's some sort of tested mechanism that ensures that intent becomes reality. Yes.
A
Second
goal
would
be
to
allow
publishers,
the
ability
to
install
arbitrary
software
packages,
containers
or
other
software
needed
for
that
deployment
right.
So
we
different
providers
like
we
want
to
provide
a
utility
that
could
be
used
not
only
by
the
kepi
providers,
but
to
be
augmented
by
consumers
of
the
kaepa
providers.
So
if
a
person
has
a
specific
sort
of
metrics
monitoring
other
tools
that
they
wish
to
use,
we
should
allow
that
capability.
K
Yeah, I kind of think that we should not build it off of that; we should use some known image, because one of the things that I'd like it to be able to do is keep track of the manifests that the image was built from, so that an end user can introspect the image and see: where did these bits come from?
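The manifest-tracking idea mentioned above might be sketched like this: the build records a manifest of what went into the image, so a user can later ask where a given bit came from. The type and field names are invented for illustration:

```go
package main

import "fmt"

// BuildManifest records what was installed into an image, so that
// the image can be introspected after the fact.
type BuildManifest struct {
	BaseImage string
	Packages  map[string]string // package name -> version
}

// Origin answers "where did this bit come from?" for a package,
// returning false if the package was not recorded in the manifest.
func (m BuildManifest) Origin(pkg string) (string, bool) {
	v, ok := m.Packages[pkg]
	if !ok {
		return "", false
	}
	return pkg + " " + v + " (base: " + m.BaseImage + ")", true
}

func main() {
	m := BuildManifest{
		BaseImage: "ubuntu-18.04",
		Packages:  map[string]string{"kubelet": "1.14.1"},
	}
	origin, _ := m.Origin("kubelet")
	fmt.Println(origin)
}
```

In practice a manifest like this would be baked into the image itself or published alongside it.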
A
Now
again,
I'm
terrible
at
words
nothing.
So
if
somebody
wants
to
walk
through
this
and
clarify
the
language,
I
know
we're
on
the
call.
There's
there's
this
idea
that
in
the
live
context
and
then
there's
explicit
ways
of
articulating
the
the
requirements.
I'm
all
ears
are
game
for
anyone
who
wants
to
help
with
making
things
more
explicit.
K
Thank you. So there are two other things that I think, long term, we want to address. I'm not sure they fall under the goals of the image stamping, because I consider this the image-building process, but: where do we store these images, and how do we get the images into the infrastructure?
J
That hasn't been the case for a very long time. AMIs are HVM; almost all instances you boot today are HVM, and it's just a standard disk image put on S3 buckets. Okay, thank you.
A
Here's a question: do we want to put constraints on Windows support, at least for the time being, given the fact that we don't actually have Windows support plumbed all the way through? I don't think we should; whatever tooling we create should have that as a future requirement. But do we want to make that a requirement now?
A
You can create local AMIs; we've been able to do that for years. It doesn't mean that you can't use them, and there are ways you can get around it if you want to, but the publication is a separate step. What I'm saying is that it's not the responsibility of this tool to do that push; it's up to you to figure out how you want to do that push.
G
A theoretical question on that provider/stamper boundary: is there any place where those sorts of discussions will happen? If these outputted files are going to be given to providers, will there still need to be some sort of common interface so that providers know how to use them? Are we going to assume that they're in the target format, or do providers need to do some validation to make sure they understand the files that they're going to try to import? Is there extra metadata we should be worried about?
A
I think we could put that in as extra requirements if we want to, like whether we need to insert metadata, but I don't know that we need to qualify constraints at this time. This is one of those weird KEPs where we're not going to know all the details until we actually start implementing it. I just want to make sure that we have explicit boundary lines around what the high-level goals are and what the high-level non-goals are. Like, where does this tool stop?
H
Will the tool itself be able to enforce the reproducible aspects of it? I mean, if we're saying that you'll just generate a SHA and that will define it, that's fine, but I wasn't sure whether you could run the tool twice and the tool itself could tell you that, yep, it built exactly the same thing the second time. We said "repeatable," like...
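The "generate a SHA and compare two runs" idea is straightforward to sketch: hash each output artifact and report whether two builds produced byte-identical results. The function names are invented for the example:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// ArtifactSHA returns the hex-encoded SHA-256 of an artifact's bytes.
func ArtifactSHA(artifact []byte) string {
	sum := sha256.Sum256(artifact)
	return hex.EncodeToString(sum[:])
}

// SameBuild reports whether two build outputs are byte-identical,
// which is the check a "run it twice" reproducibility mode could do.
func SameBuild(a, b []byte) bool {
	return ArtifactSHA(a) == ArtifactSHA(b)
}

func main() {
	first := []byte("image-contents")
	second := []byte("image-contents")
	fmt.Println(SameBuild(first, second)) // true
}
```

Note that this only verifies byte-identical output; as discussed above, timestamps and upstream package drift can make real images differ even when the specified inputs match.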
A
That's the base. I think there's the spec and then there's the status, so I think we can delineate the two. Provenance basically says, "here's what we're specifying," but we should also capture the other side. Feel free again to rearrange this, but I want to make sure that we touch both points: there's a specification, and then there's the output-artifact generation, which allows you to ask, "did I get exactly what I specified?"
K
One non-goal, and I don't know if this should be an explicit one or whether we even need to say it: the tool is just a builder, right? So we don't need the tool to guarantee that the node actually boots up, or that all the software is compatible. So maybe we need to get that specified.
A
I do know that Moshe wrote this implementation example. He's not here on the call; he's kind of working asynchronously on some of this. I haven't had enough time to really dig through the details here, but I wanted to circle back to it. I don't know if I'm educated enough to talk to it right now, but the high-level objective that I wanted to accomplish for this meeting was to make sure that we agreed upon a set of goals and a set of non-goals.
A
I'm going to defer on this particular one; I'll read through it again later, because Moshe kind of added this asynchronously. I do think there are bigger questions I want to ask right now. In particular, one of the big questions is that Packer pretty much does a lot of the things that we have as requirements: it has all the capabilities of a separate software-installation tool that we could use.
A
I've heard a bunch of constraints that people have talked about, but they haven't actually written them down. So I want to write them down and be explicit about it: what are the constraints with Packer that say we should not, or cannot, use Packer for this particular type of tool as one of the ways to solve our problems?
A
Yes, exactly. I don't want to build a tool if I don't have to; I'd rather use another tool, and if we need to publicly shame people to get them to meet our requirements, then we do that, and we do whatever we can to make that happen. But the question I want to ask the group here is: why can't we use Packer? What are the constraints that specifically do not allow us to use Packer?
D
Yeah,
so
we
just
went
through
the
process
of
implementing
a
Packer
plugin
for
our
stuff
and
I
think
abstraction
that
they
have
is
actually
really
good.
So
at
some
point
you
need
some
clean
environment
to
install
things
so
I
don't
have
any
I,
don't
think
I,
don't
have
anything
negative
to
say
here.
I
just
get
curious
to
know
why
people
don't
like
Packer.
A
Software engineers like to software-engineer, yo. But I'd rather use tools that are well established in the community and make sure that they meet our requirements. And Packer has that bifurcation split that we talked about: you could have an Ansible script for doing the software installation, and then Packer has an extension model for creating output artifacts for a number of different targets. The one requirement that Packer does place on you, though, is that it requires a hypervisor.
A
There are a million other CI tools, and Prow is running on Kubernetes, so it gets weird, right? I could see how this requirement, with all the different incantations of different hypervisors running, would be weird. But that's not to say that we couldn't create an AMI that does all the things: it has, say, a Jenkins utility, or you name a different one, like a CircleCI instance or something like that. I don't know.
E
We would have some challenges, especially if we're talking about a paid OS subscription like RHEL, where, as part of the image-build process, if we wanted to update for security fixes and things like that, we would have to register the system to be able to pull those updates, and then we would have to ensure that before we wrap up the image we clean up that subscription content. But I don't think there's anything that would violate any contracts.
D
I
think
this
is
where
a
lifecycle
hook
might
be
like
provide
licensed
material
for
things.
You
know.
A
Your
plugin
for
your
provisioner
could
do
pieces
of
that
to
you
so
as
part
of
your
ADA
to
plug
in
for
a
provisioner,
you
should
be
able
to
do
that
individual
check,
so
it's
possible
that
we
don't
use
default
provisioners
and
that
we
have
like
I
made
a
step,
and
that
seems
reasonable
to
where
we
do
a
set
of
checks
or
guarantees
before
we
actually
tick.
It.
E
I don't remember the details, but we went through this when we were trying to build out the tool for the AWS provider, and we were trying to get the images that we're publishing into a CNCF-owned account rather than a Heptio/VMware-owned account. I got a lot of vague responses from people that there were some non-enumerated security issues related to Packer, but I never got anything concrete back.
A
Well, taking a look at the time, we're actually at time, so my plan is to hopefully have this thing more sussed out. If you have questions, comments, complaints, concerns, please add them to this document. This was a good conversation. I'm going to go through Moshe's proposal here in a little more detail and try to tease apart some of the details.
A
But
if
you,
if
folks,
can
take
a
look
at
the
requirements
and
the
goals
I
think
that's
the
primary
function
and
also
take
a
look
at
what
constraints
are
not
being
met
by
a
packer
today
that
that
would
also
be
helpful,
and
hopefully
we
only
have
one
more
meeting
scheduled.
Hopefully
we
can
have
enough
content
in
our
documents
that
we
can
start
to
do
things
a
little
more
recently.