From YouTube: 20200520 - Image Builder Office Hours
A: Okay, and I believe we're recording, so welcome everyone, which is just two of us at the moment. Today is Wednesday, May 20th, and this is the office hours for Image Builder, a subproject of SIG Cluster Lifecycle. Just a quick reminder that you should adhere to the Kubernetes code of conduct. This meeting is being recorded and will be posted to YouTube at a later time. So I can drop a link to the agenda in chat, though I suspect everyone has it. And then, what am I doing here? We'll see.
A: Why did you do that? There we go. We'll take a look at the agenda. I know we're gonna start off with a demo from Moshe on the latest on the CLI, which I'm very excited about. So let me drop this link in here, see if that works for anybody. I also see Jason has joined; welcome, Jason. All right, so, yeah, not much on the agenda right now. Like I said, we're gonna start with a demo from Moshe. I had a question related to the CLI about some stuff that I have coming up soon, and then we'll continue from there. So if anybody else out there would like to drop their name in the list of attendees, thank you, and any items that you want to add to the agenda, please feel free; it's a light one today. I will stop my screen share and pass it over to Moshe. Stop share. There we go. Good.
B: So this is an example manifest for building an Ubuntu image, with a specific version of Kubernetes and containerd, on QEMU, and then resizing the input disk; that part is optional. So this would be your minimal configuration, and then the idea is you can customize the input and the engine. I don't know about the naming; I've been playing around with "image", "driver", and "engine", and it's very confusing even to me, so input on that would be great.
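The kind of minimal manifest being demoed might look roughly like the sketch below; every field name here is an illustrative assumption, not the CLI's actual schema:

```yaml
# Hypothetical minimal manifest: build an Ubuntu image on QEMU with
# pinned Kubernetes and containerd versions; the disk resize is optional.
input: ubuntu-18.04          # the image type to start from
engine: qemu                 # what performs the configuration
kubernetes_version: 1.18.2   # pinned Kubernetes version
containerd_version: 1.3.4    # pinned containerd version
disk_size: 20G               # optional: resize the input disk
```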
B: So basically we have three different types of engines: a Docker engine, a QEMU engine, and a Packer engine, and that basically describes what is going to do the configuration of the image; your input is then the image type itself. So not all inputs are going to work with all engines. On a Docker engine you can use a Docker image, on a QEMU engine you can use a qcow image, and on a Packer engine you can potentially use all of them.
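The engine-to-input compatibility just described can be sketched as a simple mapping; the engine and input identifiers here are illustrative assumptions, not the CLI's actual names:

```python
# Sketch of the engine/input compatibility rules described above.
# A Docker engine takes a Docker image, a QEMU engine a qcow image,
# and a Packer engine can potentially take any of the input types.
ENGINE_INPUTS = {
    "docker": {"docker"},
    "qemu": {"qcow2"},
    "packer": {"docker", "qcow2", "iso", "ami"},
}

def is_compatible(engine: str, image_input: str) -> bool:
    """Return True if the given input type can be used with the engine."""
    return image_input in ENGINE_INPUTS.get(engine, set())
```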
B: Sorry, this would be like the Packer version. So this would be a Packer engine (that output is wrong) with an input of an AMI, and these will still work out where you should go, but this is how you would then configure the Packer builder. What it actually does is...
B: So for every OS and version we configure each of the input types: on QEMU you'll get an image, an ISO input will have an ISO, an AMI input will know how to find that AMI. So essentially, adding support for 20.04 should, in theory, be just adding another manifest like this and then updating the filters and the locations. We have one of these for every OS type, so we have a Debian,
B: CentOS, Amazon. Not all the input types have been defined for all of them yet, but when we're done we'll define them all. Then we also have defaults, which apply to the engine: on VMware, for building a VMX, you're going to use these, and for Amazon EBS you're going to default to these values. So what it does is it takes the defaults values, then the values from the ISO; for example, if you're building on VMware with a VMware ISO, it'll take the defaults and the ISO and combine them with whatever values you've put in here (this should actually be up one level) and then spit out a Packer configuration, or a Packer build configuration, or builder configuration.
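The layering just described (engine defaults, then input-specific values like the ISO's, then the user's manifest values, with later layers winning) can be sketched as a shallow dict merge; the key names are invented for illustration:

```python
# Sketch of the layered-defaults merge described above. Later layers
# override earlier ones; keys only present in one layer pass through.
def merge_layers(*layers: dict) -> dict:
    """Shallow-merge dicts left to right; later layers win on conflicts."""
    result: dict = {}
    for layer in layers:
        result.update(layer)
    return result

engine_defaults = {"headless": True, "disk_size": "10G"}   # per-engine defaults
iso_values = {"iso_checksum_type": "sha256", "disk_size": "20G"}  # per-input values
user_values = {"disk_size": "40G"}                          # the user's manifest

packer_config = merge_layers(engine_defaults, iso_values, user_values)
```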
A: I was gonna ask you, and I think you kind of touched on it already: since you've got the konfigadm stuff at the top level right now, you're not treating that as optional, correct? Like you said, in theory you could do just the Packer stuff, but I think we had talked before about wanting to have that final provisioning step, or, I don't know what you call it, it's more configuration than provisioning, but that would be kind of a configurable backend as well, where it can be konfigadm, or Packer plus Ansible.
B: We do it in exactly the same way; we just switch from Ansible to konfigadm, and then if you want to do anything on top of that, anything besides package installs, installing Kubernetes and containerd, system settings, and whatever it is, that's when you would plug in your custom Ansible. What I'm wary of is maintaining...
B: Sorry. So one of the advantages with konfigadm is that the Kubernetes and containerd portion is fully integration tested using Docker, so I have a matrix test setup, and we can discuss whether we move konfigadm into this repo or not. But essentially we're testing all of those installation parameters for installing containerd on Ubuntu. The systemctl parts we can't really do, but a lot of the other things we can.
B: The Amazon CLI on AWS and Amazon Linux. So when I'm talking about duplication, a lot of what I'm talking about here is that those tag selectors, the "do this on that, this on that", simplify down to almost nothing with konfigadm, whereas with Ansible it's quite a lot. We have predefined tags for the different operating systems and tags for the cloud providers, and then these two items over here actually run in the background, so there's...
B: It all compiles down to commands and files. So when I added bash support, it was taking those files, base64-encoding them, merging them with a command, and out comes a bash script that will do everything from beginning to end. So if you're on Ubuntu or Debian, it will make sure that you've got GPG and wget and the keys loaded, and if you're on CentOS it will add the correct repos and do the containerd download and move.
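The "files plus commands compile down to one bash script" idea can be sketched as follows: each file's content is base64-encoded and paired with a command that writes it back out, so a single self-contained script recreates the files and then runs the install commands. The file paths and commands below are illustrative assumptions, not konfigadm's actual output:

```python
import base64

def file_to_bash(path: str, content: str) -> str:
    """Emit a bash line that recreates `path` with `content` via base64."""
    encoded = base64.b64encode(content.encode()).decode()
    return f"echo {encoded} | base64 -d > {path}"

def build_script(files: dict, commands: list) -> str:
    """Merge encoded files and commands into one self-contained bash script."""
    lines = ["#!/bin/bash", "set -e"]
    lines += [file_to_bash(path, content) for path, content in files.items()]
    lines += commands
    return "\n".join(lines)

script = build_script(
    {"/etc/sysctl.d/99-kubernetes.conf": "net.ipv4.ip_forward=1\n"},
    ["apt-get update", "apt-get install -y containerd"],
)
```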
B: So yeah, that's kind of my thinking: do the core functionality as this, so there's just the one way of configuring Kubernetes and containerd and the base, and then you can choose how you do everything else. You want to use Packer, or you want to use Docker, or you want to do some Ansible on top of that; that, then, is not doing the Kubernetes and containerd stuff, it's doing image customization.
B: I think once we dial it back to just the core and then say, okay, if you want to customize, here's the Ansible playbook you can add, or take that Ansible playbook and go mix and match and create your own thing, then away you go. But the core remains just bash, because you can do Ansible, but I keep coming back to why Ansible is a really bad choice, and that's this: you have the Ansible dependency hell, so which version of Ansible are you using? And then, okay, we can bundle a version of Ansible, but then are we going to keep that bundled version up to date, and make sure you get all of those things? So in my mind, the simplest way of supporting Ansible would be to copy the playbooks and the roles into the resulting image, bootstrap Ansible (install Ansible to a minimal state), run Ansible locally in that image, and then you're no longer reliant on the client-side Ansible installation.
B: You can also use a portable version of Ansible. I've created exactly that, portable-ansible, which basically bundles Ansible with all of Ansible's dependencies into a deb or RPM that you can then install and uninstall. So it'll give you a specific version of Ansible and a specific version of all the dependencies; you can just install it, run using that known Ansible version, and then remove the portable-ansible package, and it doesn't touch anything else. If the real...
B: It's... yeah, so I created this thing called ansible-runner that does exactly that. It builds a Docker image; inside that Docker image you have a fixed version of Ansible, the Python dependencies, and a whole bunch of other things. You run it and it drops you into a Docker container, where you can then run Ansible with all your AWS credentials, SSH credentials, key agent, and all that type of thing.
B: But, it works, but it's something that is non-standard and takes a lot of maintenance effort. Like those repos I have: all of the Ansible and Python versions in them, today, they've all got vulnerabilities, and then if you go and update them, does this thing still actually work? It becomes a really major maintenance headache, whereas if you are running Ansible inside the actual resulting image, you can say: do we want to install Ansible permanently? And if not, we're going to use this specific version of portable Ansible and then remove it afterwards, and then there's no dependency. Like, I always want you to just release a single binary: download the binary, create one YAML file, hit run. It doesn't matter what Linux, what Docker, what AWS CLI you've got; none of that should really matter, it should just work.
B: The problem with that is that it's nice at first, but then far from nice. If you look at the packaging abstraction, at first glance, "install a package, and use a different package manager depending on the OS" is a nice idea, but when it comes down to it, it doesn't work, because you have these minor changes between different environments, in package names, for example.
B: So what do you do then? Now you're managing package names in two different places: on one side you have the packages that are common, then you have all of these customizations for the differences. And then what do you do about packages that are not operating-system specific but cloud specific? There's no abstraction for that in Ansible itself; I don't even know if there's a fact for that in Ansible. So what konfigadm is really trying to do is not rebuild configuration management.
B: Its express purpose is to get you to a Kubernetes node, and then stop; that's it. Everything up until that Kubernetes node. Anything that doesn't meet that goal will get rejected; if you put up a PR for it, it will get rejected, because it's only designed for that very specific purpose, and when we do it with that very, very specific purpose, we can actually test it. Ansible, I don't know how many issues are open; there's like 40,000 open issues, and I think they were going to declare bankruptcy at one point and just close them all.
B: You have all these modules, there's like five thousand modules, but how many of them actually have an integration test? How many of them have an integration test across multiple distributions? Those problems cannot be solved within Ansible; they're inherent to the fact that Ansible is so flexible and easy. It comes with those inherent problems, and konfigadm is designed to solve those problems for the limited use case, which is a Kubernetes node. For a Kubernetes node, can we make sure that we can install Kubernetes on these specific operating systems?
B: And we can confirm that, because we can test it against all of the different versions, and we can test it in CI very cheaply. I have tests running in CircleCI against like seven different operating systems that will install Kubernetes; so I had it on Kubernetes in Docker, and I just added the containerd one. And also, if we're looking to get this used more broadly...
B: Right, a hundred percent, and the feature parity would need to come with the testing. The point at which I would recommend using this and switching from what's there already is when it's backed by PR-based testing that actually shows it does what it says it does. So the question is: how do we manage both this and the existing stuff? Because they are going to have parallel release cycles, so there might be some major stuff, not just minor stuff, on the Ansible side going forward.
B: So that's a question to you: how would you like to handle this as an experiment? In my mind, we do it as an experiment and use konfigadm as the base, but as with all good experiments, if that experiment fails and we need to switch it out and use Ansible as the base, well, that's why you do an experiment. I'm happy to do it that way as well; it's not that difficult. So, yeah.
A: I mean, I think two things come to mind. As far as maintaining the existing stuff, that isn't slowing down at all from my end; that's still gonna happen, and there's lots of work and changes coming into the Ansible-based solution. But as far as bringing this in, I could see two different things. One would be, you know, kind of merging what you have: it comes into master, but it's kind of segregated by being in its own folder, and it's not really overlapping with any of the stuff
A: that's there already. Or a slightly more extreme solution would be, you know, whether we use more of a git approach and it lives on a branch, or we cut something where it can live independently until we want to bring it into master. I don't know what people's... well.
A: I do think this is, you know... we want an awesome, solid CLI for this, so... poor choice of words there. But what I mean is, unless we've got documentation that, right at the very beginning, is telling people to use it, most people aren't going to stumble on it by chance and start playing with it. So I don't think there's much of a danger of people trying something that isn't ready for primetime.
A: Sounds like... no, I mean, I'm fine with it being in master, depending on what the folder structure is. I think the last push I saw on it, the last commit, unless something happened in the last day or two, was a couple days ago (we lost Jason, by the way), and it was off in its own area, which I think is fine.
A: I mentioned maybe about a month ago, though I don't remember if you were there; maybe it was six weeks ago, it might have been the time that you were out, but I also have a CLI-based tool that is OVA specific that I want to start bringing in here. It kind of replaces a bunch of the Python scripts that we have for building the OVA and dealing with metadata and stuff like that, and I'm not sure whether that should live, you know, at the top level and be part of your tool.
A: And when I mentioned this before, you were the one who pointed me to a library called go-ova, I think, and I took a look at that; it's also not quite what I need. I could do what I need by enhancing the Python scripts in a lot of ways, but I'd rather it be a Go-based one, so that maybe it would end up, you know, in this uber image-builder CLI also. So it could potentially, but not necessarily, start off as something that had some library functions that could be vendored by external tools, but I...
B: What am I taking in, and what am I going to do with it? So if I remove everything else and say I'm gonna take in a VMDK, and my engine is going to be a no-op, for example, so I'm not doing any configuration, then I'll use the existing Packer-plus-Ansible-generated VMDK, and I can pipe that into this tool and then do the OVA properties. I don't know what else you need, yeah.
A: And that's exactly where I'm going with it, in that the functionality that I need, that I don't have today, is way finer-grained customization of what goes into the OVF metadata: being able to change properties that are in there, being able to override ones that are already there, being able to add lots of new ones that aren't in there at all, having sane defaults if something doesn't exist, but being able to customize lots of stuff.
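The property-handling semantics being asked for (sane defaults where a property is missing, overrides that always win, and the ability to add entirely new properties) can be sketched as a three-layer merge; the property names are hypothetical examples, not real OVF keys from the project:

```python
# Sketch of the finer-grained OVF property customization described above.
def merge_ovf_properties(existing: dict, defaults: dict, overrides: dict) -> dict:
    """Defaults fill gaps only; overrides always win and may add new keys."""
    merged = dict(defaults)   # start with sane defaults
    merged.update(existing)   # existing metadata beats defaults
    merged.update(overrides)  # user overrides beat everything
    return merged

props = merge_ovf_properties(
    existing={"eula": "original-text"},
    defaults={"eula": "default-text", "version": "unknown"},
    overrides={"eula": "custom-text", "build_date": "2020-05-20"},
)
```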
A: Not configurable, but yeah; what you mean is there's nothing you can do right now to change it without having to change some code. It's configurable, but it's hard-coded, so to speak. The script that embeds that EULA takes the EULA as an argument, so you could point it to a different one. For the downstream stuff, for the TKG one, it's gonna use what's there today, and that's the right one, but for the upstream Kubernetes community stuff, I could certainly see that; right now it's just kind of fallout from what was there to begin with.
B: Just another thing I want to point out: this is using a custom version of go-yaml that supports templating inside YAML. You will see the bash-style template in env; when we parse this, it will automatically pull out environment variables and let you run templating one-liners. Also, this is, for example, generating the AMI name using a Go template, and you could then extend that out; I was going to do a go-getter one as well.
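The in-YAML templating being described could look something like the fragment below; the exact syntax is an assumption sketched from the discussion (environment-variable pull-in plus Go-template one-liners), not the fork's documented behavior:

```yaml
# Hypothetical sketch: env-variable expansion and a Go template inline
# in otherwise fully valid YAML.
region: $AWS_REGION                                 # pulled from the environment
ami_name: "kubernetes-{{ .KubernetesVersion }}"     # Go-template one-liner
```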
B: There are lots of other templating solutions, but I actually found that this was the only YAML-native way of doing it. There's also ytt, but I found that not very usable as a library. So this is fully valid YAML, comments are comments, and the general tags work on your multi-docs. Okay.
A: Yeah, I don't know where the right place is, but I would like you to be able to get it in there, so we can start to do the iteration, start to shake out the experimentation, see whether konfigadm is gonna work or not; you know, just have a place where we can start getting the...
A: That's one of the other things about that Python script: it does assume that the VMDK came from the Packer build, so it's looking for some metadata that exists in a JSON file that Packer spits out. But that can get there any way you want it to, to unblock you; it's just metadata. But okay. Yeah, I mean, I want to be able to iterate on this. So, so...
A: Okay, great. If at all possible, I'd like to get out of here five or six minutes early, because I've gotta go pick up my kids, but I have one other thing on the agenda that I'd like to ask you about, Moshe, which was basically Justin's PR about his container-based build. He made the changes that we discussed about a month ago; wow, time flies, it's been four weeks. I just want to see if you had any objections to that merging.
D: My air-gap thing, it's done; I just haven't tested it with non-air-gapped builds, just to make sure that none of those variables being in there breaks anything. But I rebased it, and the only other thing is that one of the things in that pull request is it starts out putting a Packer manifest in JSON format.