From YouTube: Cluster API Provider Office Hours 20190128
A
Hello, and welcome to the January 28th edition of the Cluster API Provider AWS office hours. If you have any additional items that you want to add to the agenda, please go ahead and do so now, and also add yourself to the attending list. It looks like Chuck's already added a link to the agenda in the Zoom chat. All right, to start with the current status: since our last meeting, we no longer have a blocker on Kubernetes version 1.13. We recently had a PR from Vince come through to update the dependencies.
B
Just wanted to get a readout of where we're at with the test automation as well as artifact generation, and whether or not we need to pull the trigger on doing our own thing just to get to the point where we get v1alpha1 out the door. So, it's more of a question than a comment, for Liz rather than anybody else, I suppose.
C
I've got an issue that shows the state of things today. I can add it to the document, but the upshot is: we still need to glue everything together, get the new credentials into Boskos, and convince our test automation to talk to Boskos, and then we need to test it a whole bunch. But we're going to need to test it a whole bunch regardless.
B
There is separate machinery for that; that's outside of Prow. But, I mean, do we have a hosting location? Because we can always kick a Prow job many different ways: if we can see the automation, we can kick it. The question is, do we have a place for all of the artifacts once they're generated, and this includes AMIs? Do we have that story done, or thought about, completed yet?
F
An idea is that this simplifies my gigantic PR that I've got. What I do is: they run some commands and it will spit out some customized templates, and then they can feed those into kubectl, and it'll be versioned with the binary and embedded in it.
G
No, but the timeline is still undecided. I think we will need to discuss this like next week, also because it's a lot of work, so we need to see what they accept: like, you know, how do we split the work, and then what goes in which milestone, and who accepts the work. But apart from that, I'd love more comments on it, and I was going to revisit it, probably today and tomorrow, and present it at the next meeting, Wednesday I believe.
B
So just to share my hot take: I'm pretty adamantly opposed to breaking up the API group for v1alpha1. I'm totally fine with it in a post-v1alpha1 world. The biggest concern I have (and I'll bring this up in the other group) is the potential fragmentation problem that can exist, because everyone will have to rebase; it'll need to be a non-destructive, level-set of changes.
G
And the change doesn't have to be disruptive at first; we can take some... you know, this is kind of "where we want to go," right? On your point, because I see how this can impact a lot of providers: I don't want to just derail v1alpha1, and that's not the goal of the PR.
G
But it's a good way to get agreement in the community on what we want to do and where we want to go, and it will resolve problems that we have right now, for example having only one cluster per namespace, and to say, okay, we don't want to split apart the cluster and machine actuators for now, because this is where we're going to go, and so we can punt that work for later. So I completely agree, and I think there are some ways that we can include some changes.
G
Groups, so that's a good question. I think even from a Cluster API AWS provider perspective, we would benefit from having, for example, an infrastructure actuator, which is going to be separate. Given that, right now, in the current way of things, we have the cluster actuator, which will create the infrastructure. And that's the next point that I was going to bring up.
G
So right now, the spec that I put out is kind of... I don't want to say it's a full hack, but it's more like "how do we fit these things in." And if we can have a way to split that apart, I think that would be much cleaner longer-term, and it will help resolve the issues of how many clusters we have in a single namespace, what defines a unique cluster, and what the relationship is between the cluster, the machines, and the control planes, I think.
B
We're conflating API groups, the separation of the API groups, with the constructs of what's currently being done. I don't think the separation of the groups themselves really has... I don't think any of the problems that you previously stated can't be solved currently, if the APIs are well structured enough. The API grouping comes with its own set of constraints, but again, we can defer this.
G
This is, I think, issue 267, to bring support for existing VPC infrastructure. This is a lighter change: it just brings in the concept of a managed and an unmanaged network scenario. We still will create security groups, but we won't create the VPC, subnets, routing tables... pretty much all the primitives. But the goal is to make it simple enough to say "I want a cluster in this VPC," and mark it by a convention: so if the ID is there, the infrastructure is pretty much treated as pre-existing.
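The ID-presence convention described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the provider's actual code; it assumes one plausible reading of the discussion, namely that a user-supplied VPC ID marks the network as pre-existing (unmanaged), so only the security groups get created:

```python
# Hypothetical sketch of the ID-presence convention: if the user supplies
# an existing VPC ID, the provider skips creating the VPC, subnets, and
# route tables, and only manages the security groups.

def network_scenario(vpc_id=None):
    """Classify the network as 'unmanaged' (bring-your-own VPC) or
    'managed' (provider creates all primitives), by convention."""
    if vpc_id:
        # An ID was given: reuse the existing VPC, don't create primitives.
        return {"scenario": "unmanaged", "create_vpc": False,
                "create_security_groups": True}
    # No ID: the provider owns the whole network stack.
    return {"scenario": "managed", "create_vpc": True,
            "create_security_groups": True}
```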
G
I think this is a good one to go in pre-alpha, because, I mean, this is probably going to be one of the super first-class items. It could be here, on the NetworkSpec, and I think I fixed all the comments, Jason, that you made on it. I will rebase the PR and make sure that it's up afterwards, but if anybody else has feedback or user stories that they want to share, that would be great.
A
We add an additional tag. So we do put the kind of shared management tag, which is used by the cloud provider integration as well as other tooling like kops, but we also added an additional tag to be able to potentially differentiate in cases where there may be, you know, some other tooling managing resources as well.
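As a rough sketch of the two-tag scheme described here: the `kubernetes.io/cluster/<name>` key is the real shared convention read by the cloud provider integration, while the `example.io/managed-by` key is a hypothetical placeholder for the differentiator tag, not the provider's actual key:

```python
# Hypothetical sketch of dual tagging: a shared cluster-membership tag
# (readable by the cloud provider integration and tools like kops), plus
# a second tag recording which tool manages the resource, so cleanup
# logic can tell its own resources apart from ones managed elsewhere.

def build_tags(cluster_name, managed_by="cluster-api-provider-aws"):
    return {
        # Shared convention: marks the resource as owned by the cluster.
        f"kubernetes.io/cluster/{cluster_name}": "owned",
        # Differentiator (hypothetical key): which tool manages it.
        "example.io/managed-by": managed_by,
    }

def ours(tags, cluster_name):
    """True if a resource belongs to the cluster AND is managed by us."""
    return (tags.get(f"kubernetes.io/cluster/{cluster_name}") == "owned"
            and tags.get("example.io/managed-by") == "cluster-api-provider-aws")
```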
G
As well, yeah. So this is an important improvement; we thought about this, I think, a few months ago. So, another issue came up about cleaning up resources that a cluster has spun up. I was wondering to what extent we want to clean up dangling resources, and if we should do this in the v1alpha1 stage. As far as the other points in this PR, I think they're all actually covered. The tags are actually important now, after that change went in.
I
This is Andrew; I'm kind of new to this call and I'm just trying to listen, but I do a lot of a similar workflow on another project, and all I'll say is: most people probably have their own cleanup scripts, so it probably wouldn't be too hard to get from there to what you're talking about, simply because if you do the dev/test cycle enough, eventually you break things, right, and you have all these orphaned resources out there. So all these variations have a cleanup, a reconcile, or a "destroy force."
B
Yeah, Liz's option that she talked about earlier is very nuclear; it's like a semi-periodic account cleaning. Nuclear in a good way, right? Well, that depends on your definition of "good." It's not the ideal world, and I think this is an AWS problem more than anything else, right? Yeah.
C
AWS just doesn't, and so we're left with... the two options are basically, you know, the sort of classic journaling garbage collector, where you keep track of every single resource as it's allocated, or the nuclear option of "you terminate the program to stop memory leaks," and those are really all you have.
H
I mean, so kops will try to clean up after itself and will delete any tagged resources, including ones that are created by the cluster. But in addition, in CI, we have sort of the aws-janitor script, which... the nuclear option sounds good compared to this, but here we are. Every hour, I believe, it scans all the resources and writes them down, and if they're still there in, I think, two hours or four hours, it deletes them. So the argument is that no test runs for more than two hours, so we can...
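The behavior described here (scan everything, record when it was first seen, and delete anything still present past a deadline) can be sketched as a tiny mark-and-sweep loop. This is a simplified model for illustration, not the actual aws-janitor code:

```python
import time

class Janitor:
    """Toy mark-and-sweep janitor: resources first seen more than
    `ttl` seconds ago are reported for deletion on a later scan."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.first_seen = {}  # resource id -> timestamp of first sighting

    def sweep(self, live_resources, now=None):
        now = time.time() if now is None else now
        doomed = []
        for rid in live_resources:
            seen = self.first_seen.setdefault(rid, now)  # mark new arrivals
            if now - seen > self.ttl:  # still around past the deadline
                doomed.append(rid)
        # Forget resources that disappeared on their own.
        for rid in list(self.first_seen):
            if rid not in live_resources:
                del self.first_seen[rid]
        return doomed  # the real janitor would actually delete these
```

The "no test runs longer than two hours" argument corresponds to choosing `ttl` larger than the longest legitimate test run.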
C
But in our case, we don't really want this to be the primary means of cleaning up. Part of this is also, though, just from reading through the aws-janitor codebase: it looked to me like kops kept a journal, because I saw it reading objects out of S3, and I think that was just me not understanding where those S3 objects came from. Yeah.
H
There are some hard-coded things, like we ignore certain tests that we know take longer than two hours (I think there's a stress test, for example), and when we clean up DNS names, we try not to delete the DNS names that we're really using, for example. So it's not great code, but it works, and...
G
I guess my question relates to volume, right? Like, if you have one of these tests per PR, you could potentially end up with a bunch of, you know, resources in the backend, and that's where my question comes in: with new tests coming in, what about resource limits and stuff like that?
I
In addition to the tags, whenever possible in AWS, for our CI I also create objects with identifying information in the actual object name, like a load balancer, for example. So even if the tag didn't get applied, there's still some amount of filtering I can do. I don't know if that's of use here, but no, tags don't always get applied; but if an object's created, it always has a name, if it allows any type of custom name.
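The name-based fallback described here can be sketched like this. The naming scheme below is hypothetical, just to show the idea of embedding identifying information in the resource name so untagged resources can still be matched to the run that created them:

```python
# Hypothetical sketch: embed identifying info in resource names so that
# even untagged resources can be matched back to the CI run that made them.

def resource_name(prefix, cluster, kind):
    """Build a name like 'ci-pr123-elb'. Classic load balancer names are
    limited to 32 characters, so truncate defensively."""
    return f"{prefix}-{cluster}-{kind}"[:32]

def belongs_to(name, prefix, cluster):
    """Fallback filter for resources whose tags were never applied."""
    return name.startswith(f"{prefix}-{cluster}-")
```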
C
I think we do store some of this information; it's just that (I think Vince was talking about this earlier) the delete functionality for a cluster is just not a hundred percent there yet.
H
And we do try to do that, for example in kops, where every job has a name: we can figure out a unique name from it, based on either the PR number or the SHA that we're building. And so kops, for example, sets up a DNS name, which is obviously a singleton, and it more or less works. That's one of the major causes of leakages in the kops tests.
C
We avoid that, and as far as I understand it, I have plenty of people who are willing to provision basically as many AWS accounts as we want. So once we populate Boskos with those, we should have a lot fewer interactions between accounts, and the interactions we do have will be a lot more obvious, because it'll be "hey, I couldn't check out an account," not "hey, something's weird about this account." So, but yeah, we can take this offline.
A
Yeah, as he said in the comments, ALB we threw out because of TLS reasons: we're using it for, you know, all of the API communication, so that throws away client certificate authentication there. NLB, as Naadir said, was mainly around hairpinning, especially when using a private subnet. It just doesn't work out of the box right now, and nobody's troubleshot far enough down into the configuration to make it work in that environment.
H
The other thing I'm aware of with NLB is that you have one IP per zone, and you have to manage your sort of failover yourself if you lose all the backends in a zone, which is not unlikely with masters. There's lots going into chat, but yeah, I will collate some of the stuff in chat as well.
E
Yeah, so Tim and Tim approached us about a week ago about the machine stamping. Apparently you guys have a machine stamping process, and there is a desire to make it more universal. We're also using machine stamping for the vSphere provider. And let me first, before I go in and explain the machine stamping: there are two things I'm going to discuss, the vSphere Cluster API provider, and another project that we've done in the past.
E
There are actually two artifacts that need stamping. One is the installer OVA; I mean, that actually becomes a VM when you install it. The other is what you would call the node image, which I think is what you guys are mostly stamping. Okay, so, today in the vSphere Cluster API provider... why don't I share my screen?
E
In the Cluster API vSphere provider, right now we have this folder here called "installer," and inside of this installer you simply run make with another parameter, and it will build you... it will stamp out an installer OVA, and this process is purely container-based. We don't use any external tools like Packer, and we don't require a hypervisor; it runs completely inside Docker. But, like I said, there are two artifacts that need to be stamped, right: the installer OVA and then the node image. And today, the node image...
E
We do have a stamping requirement there, and let me explain what it is that we need the stamping of the node image for. Today, we rely on the users to upload a cloud-init image into vSphere and create a VM template from that. However, there are some internal projects that require a custom cloud image. Today we're using a cloud-init image that we customize, so that we can pass data to the resulting VM.
E
...during the startup process. And so the question comes up: where do we store that image, and how do we create that node image? Right now the other project is building it themselves and bundling it with the OVA, which is not the most ideal situation, because, you know, you get into the question of how do you upgrade that node image. So I...
E
Without further delay, I can actually just demonstrate this for you guys. So let me first explain: I worked on a previous project called VIC, vSphere Integrated Containers, and what that was is a runtime that runs on vSphere, kind of like Kata Containers and Windows containers. All containers are run as VMs for stronger isolation, and we don't have Docker bits running underneath.
E
So while it's doing that, let me explain the process. Okay, so if you want to run a container on vSphere, you just use Docker, but you point your Docker host to VIC, and whenever you run a container it pulls down the image, extracts out all the layers inside the image, and then it stamps out a disk for each image, and then you can run the containers as VMs. So let's go ahead and...
E
Basically, automated stamping of machines from a Docker container. So, if you have something like... I assume you know what kind is; it's a project that runs a Kubernetes cluster inside of Docker. So if you have something like VIC, you can actually run a Kubernetes cluster by just running kind on VIC, and it does all the stamping for you. Let me explain; at the end, I just want to...
B
I think I see where you're going with this: you're basically auto-creating images... auto-creating VM images directly from Docker containers. (Yes.) And if we do Docker automation, just like a standard Dockerfile, we can get the latest and greatest thing just by publishing a Docker image and then use that to auto-stamp. Yeah.
E
So this was an idea that I proposed to the VIC team when I left them: that they could actually easily create a Kubernetes cluster by running a bunch of images that then use kubeadm to bring up, to bootstrap, the nodes. And by doing something like this, you can actually have multiple images of Kubernetes in your running environment. So look, there's a couple of benefits to this. First is that the stamping is automated, and second is that the images are stored somewhere.
A
Similar to LinuxKit, yeah. One main concern that I would have, if we did try to adopt this type of approach for AWS, is: would we be losing any type of AWS optimizations that are present in the OS base images, that would not necessarily be present in kind of a generic base image for an OS?
F
So I think James Nugent did some similar stuff at Joyent, and added some stuff to Packer to enable some similar work. A couple of issues: you ultimately do need to spin up an EC2 instance to write the image. There is a VM import thing that AWS supports, but you'd still need something that looks like Packer, or Amazon Simple Systems Manager, to do that bit and spin up normal EC2 instances. And I think the other main concern would be things around systemd stuff.
F
Alright, quite possibly, yeah; certainly, right, and there's going to be [inaudible].
E
It can be figured out without systemd; that's exactly what we're doing today for building the OVA that I was showing you in CAPV. We're actually using just standard script files on the node machine to build those machines; they run SysV init. So you can totally do that. There's a bunch of, you know, things you have to be aware of, but all in all, it works.
B
I think Jason's question was kind of left unanswered, because cloud providers, or OS providers, typically do customize their cloud images to be tailored with specific optimizations for the providers. The question is: we're going to have to somehow bake that information into the Dockerfiles, or Docker configuration, to make it seamless, unless anyone knows if that exists already.
H
I would hope that we would... as I imagine it, we would have different Docker build files for AWS versus GCP, because there are different drivers, but the bulk of that work should be identical, and then there are a couple of lines at the end which are like "install the AWS-optimized network driver" (I don't know if GCP has an equivalent, but let's pretend) versus "install the GCP-optimized network driver."
C
There's not really a guarantee it'll apply cleanly as a layer. Like, maybe they're nice and distribute .ko files, but quite possibly they're not, and, you know, we're going to have to compile those .kos ourselves, or, you know, maybe they've done some weird hot patching to the kernel images they're building. I don't think this is all super straightforward.
B
Sorry; I think the model is very clean. It gives you a certain idempotency, guarantees that you don't get when you're using other tooling, but there are a lot of edge cases here, and I don't think it's fully there; there are a lot of questions that come along with it. I like the idea from an idempotency perspective, I think I like it a lot, but I don't know if we can do that today. Go ahead.
C
My question, my thing, was: just clarify for me, we're talking about the AMIs that we eventually install on the nodes that we're provisioning with Cluster API? And I think that using Docker to build VM images that are not intended to run in Docker... I don't know how much of our existing knowledge we can rely on for that. Like, is that something that we are already doing now? Is that something that other people are already doing? Linux...
K
I just wanted to add that, you know, when I do mention LinuxKit in this context... if I were to suggest using LinuxKit within this project, I would say you could actually use LinuxKit without using Alpine and all the LinuxKit packages. You can just use the build tools and create your own packages, which could be fairly monolithic, and we don't have to use containerd and all the other bits.
A
So for most of the basic OS providers, I think somebody would have to manage the underlying kind of base OS image that we would use here, because all of the existing container images aren't meant to, you know, be full host OSes. Alpine is the only one that I'm familiar with that is running with the LinuxKit tooling today.
B
So there are two hats: there's the short-term CAPA hat, and then there's the long-term SIG Cluster Lifecycle hat. From a CAPA hat, I don't think it's possible in the short term to address migration of stamping, given the open questions. With the long-term SIG Cluster Lifecycle hat, this sounds very interesting and promising, and I would love to see more work in the open, possibly, on this, because it has the potential to eliminate...
B
That being said, you know, I think there's possibility here, but I don't know; CAPA itself might not be the right venue, even though I asked you to come here, and I appreciate the update, I think it's really useful. And I think this is more generic: it could apply to everybody if we get it right, but that would take a while; that would have to be resourced separately.
A
So the way that we've looked at it so far is that anybody who is using this for real production workloads, in any type of non-trivial environment, is going to want to provide their own base OS images anyway, whether it's from a regulatory standpoint or just a corporate process standpoint. So we provide the tool for those users.
F
I would say, previously... so, as soon as you want to use encrypted root volumes, you need to copy an image, even if it's public. So typically in the past where I've worked, we've maintained a set of images for customers to consume, and then it's copied into their account, and then they customize it further for use. So they keep the timestamps from the source, and then tailor it for their own ends.
B
Yeah, I appreciate you coming to talk about it. Look, I think, for SIG Cluster Lifecycle, this has a lot of possibilities. I don't want to see it just get dropped, but I do think that there's going to be a time window for when somebody can adopt it, and a maturation process.
B
These open-ended questions kind of would need to be addressed along the way as well. So, I don't know what the plan is, if there is a plan, about stamping, but I think maybe writing down a doc, to have the idea in place, and then we could circulate it in SIG Cluster Lifecycle, if you wanted to, to see if there's a broader audience of folks that are interested in taking this one along. Okay.