From YouTube: 20191009 - Image Builder Office Hours
A
Hello and welcome to the Wednesday, October 9th edition of the Image Builder office hours, a sub-project of SIG Cluster Lifecycle. Just a reminder that this meeting does adhere to the Kubernetes code of conduct, so in general, please be excellent to one another. We have a relatively short agenda today, and this is our first meeting.
A
So if there are any topics that you want to discuss, please go ahead and add them to the agenda document, and I can go ahead and link that in the chat. Also, if you're here, feel free to add yourself to the attendee list as well. All right, with that, the first item that we have is Moshe discussing goals, non-goals, and the roadmap.
A
Ideally, we'd have the same tool that we can use for the kind of default images that can be used for demo or POC use cases, that we publish through the community, but also have the same process that people could use to build and maintain their own images for their own environment. So wherever they need to worry about any type of security, compliance, or anything like that, they should be able to use the same tool that we're using upstream to produce those images.
D
I guess I can... oh yeah, yeah. I think what I would like is: there are a lot of steps that I think are common to, you know, users that are not building images, but maybe are provisioning an already-running operating system with the dependencies that they need to get Kubernetes up and running. So it would be great if that functionality in this tool were also accessible to, you know, somebody like me, so I can use this tool on an already-provisioned machine.
E
Go ahead... no? I think it's the time delay that always gets me, yeah. So until recently I was in the field engineering organization; we were very customer-facing. So I just want to make sure, going along, that we have the right knobs and customizability, and that it's usable and flexible generally. But that's just repeating what everyone else says, otherwise.
B
Does anybody have any strong opinions against anybody's use cases, so that we're all on the same page, aligned, and going in the same direction? So if I don't hear anything, I'm going to assume that we are all going in the same direction, and maybe the next step is to just create a PR on the README with kind of a plan, to keep everyone going forward.
A
Yeah, so I think there are a few different challenges that we have. One of them especially is, you know: how do you consume these images from a tool like Cluster API or some other project? And, you know, if there is a precondition for even the demo or POC use case that you have to create images and publish them somewhere before you can get started, that's a pretty heavy requirement for users who are just trying to kick the tires with your project.
C
I'll say that I agree as far as versioning goes. I agree with what Moshe said; I'd like to see both the images and the tool itself versioned, because that has come up on my side, where, you know, we wanted to produce new images for, say, the same Kubernetes version, but something within the tooling had changed, and it wasn't necessarily easy to identify, okay, what version of the tool was this built with? Because they do produce different outputs, for very good reasons. So how we do that is of interest to me, especially since I started thinking about it from the git perspective and tagging. And right now this repo has multiple projects in it: there's kind of the CAPI image generator, there's the tooling that dims contributed, and we may not want to tag them at the same time, so that creates some friction there about how to version those changes in a way that we all agree on.
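One common way to version multiple projects living in a single repo, as discussed above, is to give each sub-project its own prefixed git tags so they can release on independent schedules. A minimal sketch — the repo, sub-project, and tag names here are hypothetical, not the actual image-builder conventions:

```shell
# Create an empty repo to demonstrate per-sub-project tags (names are hypothetical).
git init -q demo-repo
git -C demo-repo -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "initial"

# Tag each sub-project independently with a prefixed tag, so the CAPI image
# generator and the deb tooling can be released without tagging each other.
git -C demo-repo tag capi-images/v0.1.0
git -C demo-repo tag deb-tooling/v0.3.2

# List only the tags belonging to one sub-project.
git -C demo-repo tag -l 'capi-images/*'   # prints capi-images/v0.1.0
```

Slash-separated tag names are ordinary git refs, so existing tooling (checkouts, `git describe --match`) works per prefix without any extra machinery.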
B
To
support
images
due
to
Seabees
coming
out
understand
why
other
people,
but
is
that
something
that
maybe
this
project
can
handle,
is
actually
providing
some
limited
security
support
for
images
so
that
we
can
kind
of
level
up
the
playing
fields
for
everyone
to
say,
use
these
images,
consume,
updates
and
we'll
do
our
very
best
to
give
you
in
fact
not
defect
free
but
see.
The
e
free
images.
A
In addition, not all of the cloud providers really make it easy to publish images, which also means that we would have to work on getting images into their respective marketplaces and all of that stuff too. So while I think it's a nice-to-have to have the public images, I am not sure, without more automation and buy-in from the cloud providers themselves, how feasible it is to really do.
A
I can say right now, with what I did today for pushing the Cluster API images for the AWS provider: I generally try to relatively fast-follow the Kubernetes releases, and doing that normally takes about two to three hours out of my day, between creating the images, you know, dealing with any potential breakage in the tooling, and then testing those images by, you know, deploying a simple cluster.
A
So at least today, it's not a trivial amount of time to publish and track the work for those. But if we do find somebody that is willing to do that, I would definitely be interested in helping and working with them to, you know, understand the process that we've been going through so far, but also other potential hazards with the process as well, if we're trying to track more than just the Kubernetes releases.
G
Hi, so...
F
One thing that I'm trying to see is: in this project we are building images, right? But then these images have specific versions of kubelet and kubeadm and kubectl that go into them. I'm trying to see if there is a way to speed things up so that we are able to test CAPG, for example, easily, without having to build everything all over again.
F
That was one aspect of it. The other aspect is how we test: say there is a PR in k/k, and we want to test that with the rest of everything else, like Cluster API and CAPG. How do I easily build a kube-apiserver, for example, and use that custom kube-apiserver and bring up CAPG with it? So that was the other variation. So one is the pre-built packages that go into the image builder.
F
Yes, so just to give you a little bit more background: over the last couple of weeks, we stood up end-to-end testing for CAPG. So we have two variations: one variation is we use CAPI (Cluster API) and CABPK pinned to specific known versions that work, and the other one is both CAPI and CABPK at master. So those are the two variations that we currently test. This is on testgrid; if you can't find it, just tell me and I'll send you the links.
F
So those are working fine; they are green, and they show, you know, when there is a problem with CAPI: it goes red and then we can go fix whatever was broken, right, and it captures that as information for us. But what I haven't been able to test is things like, say, I want to use...
F
...the kubeadm artifacts from ci-cross, which get updated every few hours. How am I to test those kinds of things? And while looking into that and talking to Jason, we realized that there are two things we still have to somehow manipulate in the CI jobs. One was the Debian packages that go into image builder: can we possibly build the Debian packages and use the image builder to build images just for that single run? But then it just takes longer. So that was one; the other one was the container images.
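The per-run flow described above — build the deb packages from the artifacts under test, then feed them into image-builder — could be wired up as a periodic CI job. A hedged sketch in Prow's periodic-job shape, where the job name, container image, make targets, and paths are all illustrative assumptions, not the project's actual configuration:

```yaml
periodics:
- name: periodic-image-builder-debs     # hypothetical job name
  interval: 6h                          # "every few hours", per the discussion
  spec:
    containers:
    - image: gcr.io/example/image-builder-ci:latest   # hypothetical builder image
      command: ["/bin/sh", "-c"]
      args:
      - |
        # Build kubelet/kubeadm/kubectl debs from the ci-cross artifacts,
        # then build a VM image that installs those freshly built debs.
        make -C images/debs build
        make -C images/capi build-ami EXTRA_DEBS="$(pwd)/images/debs/out"
```

Because this runs as a periodic rather than a presubmit, the longer end-to-end image build is not blocking any PR, which matches the trade-off raised later in the discussion.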
C
You know, I kind of started that, and then I got pulled into some specific new work priorities, and I kind of left it for the last couple of weeks, so I haven't commented on it in a while. But yeah, I mean, kind of to your point, dims: it's hard for me to wrap my head around a rapidly changing CI process, or being able to test individual PRs, when, at least from my perspective, you know, this tool is producing very heavyweight images; like, it takes a long time to build.
C
But
since
the
like,
the
cluster
API
image
process
creates
a
whole
VM
at
the
end,
whether
it's
hosted
in
a
cloud
provider
or
it's
a
you
know,
a
disk
image
that
you
download
and
use
they're
big,
because
their
entire
LS
images
so
I'm
having
just
creating
one
to
test.
A
small
change
is
really
complicated
right
now
and
like
is
it
a
very
heavyweight
process?
F
So
yeah
we
we
can
first
tackle
the
post
summer,
jobs
where
we
are
not
thinking.
We
are
running
something
every
few
hours
right,
six
hours,
let's
say
three
hours
or
six
hours.
So
in
that
case,
then
we
can
easily
build
the
VM
images
themselves,
but
then
we
still
have
the
problem
of
container
images.
How
do
we?
C
The next problem, I guess... I think, at least for the Cluster API images, that is being worked on, because VMware has some needs there to be able to use container images that are not the upstream ones and pull them into the image. So I think it's happening anyway, and I kind of alluded to that before, because it was hard; like, kubeadm doesn't want to pull those by default, and there are some issues there, but I think they've been being worked out.
C
The
main
person
working
on
it
right
now
is
not
on
this
call,
but
he's
on
my
team,
one
of
my
co-workers
and
yours,
so
it's
sweet
I
mean
he's
mostly
doing
that.
Okay,
so
he's
the
one
who
I
think
would
know
the
most
about
pulling
in
custom
container
images
into
the
image
stamping
tool
for
cluster
API,
perfect.
C
Yeah,
you
know
so
from
the
cluster
API
side,
there's,
basically
a
variant
for
each
cloud
provider
and
so
cat
V
consumes
OVA.
So
that's
a
you
know
a
single
VM
image
on
AWS.
It
creates
a
Mis
things
that
end
up
hosted
within
the
cloud
provider.
So
you
end
up
with
something
hosted
within
GCE,
AWS
yeah.
So
there's
VM
images.
C
I would agree with that. Right now there's nothing in the CAPI image generators that does that; you're generating a whole new VM disk image every time. The CAPA provider starts with an existing AMI, so it's starting with an existing image with an OS on it, and then customizing it and creating an AMI; but the CAPV generator starts from an ISO.
C
It's
actually
on
my
list.
May
now
one
of
the
things
I'm
doing
this
to
break
that
into
two
stages,
because
the
image
that
we
create
after
you
do,
the
ISO
installed
is
identical
each
time
almost
that's
why
I
haven't
split
it
up,
but
it's
about
to
be
where
it's
kind
of
a
one-to-one
between
like
and
ISO
and
an
image
should
never
change
once
that's
in
place,
it'll
be
a
two-step
process,
so
you
don't
have
to
do
the
ISO
install
every
time,
but
right
now
you
do
and
that's
one
of
the
most
time-consuming
parts
I.
A
So, for example, for AWS, the Ubuntu images use an AWS-oriented kernel rather than the standard Ubuntu kernel. I want to make sure... like, one of the reasons why we initially went with the Packer route, and, for the AWS one, using the upstream images as a base, was to kind of get those optimizations and configurations built in, where we're not having to track and maintain those separately. And I worry a bit about the maintenance cost of trying to chase those.
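The approach described here — starting a Packer build from the upstream cloud image rather than from an ISO — looks roughly like the following `amazon-ebs` builder sketch. The region, instance type, AMI name, and filter values are illustrative assumptions, not the project's actual template:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t3.small",
      "ssh_username": "ubuntu",
      "ami_name": "capi-ubuntu-1804-{{timestamp}}",
      "source_ami_filter": {
        "owners": ["099720109477"],
        "most_recent": true,
        "filters": {
          "name": "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*",
          "virtualization-type": "hvm"
        }
      }
    }
  ]
}
```

Because `source_ami_filter` resolves to the publisher's latest image, the AWS-tuned kernel and cloud configuration come along for free, which is the maintenance saving A is pointing at.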
B
I
think
we
can
support
both
options.
So
once
once
you
have
a
the
basic
approach
of
holding
an
image,
we
can
start
with
a
disk
image
or
we
can
start
with
enhancer
and
instead
of
going
the
full-blown
pata
route,
which
is
to
create
security
groups
and
SSH
keys,
and
it's
to
section
into
the
instance,
which
creates
a
whole
bunch
problems.
A
I think part of the issue is that the process and the configurations that feed the base images, that the different distributions publish to different cloud providers, is not a transparent process. So we don't necessarily have insight into what they are doing differently versus the kind of generic cloud images that they're publishing.
F
We haven't done that yet. Right now, the only thing that we are able to use is, you know, what we currently have, right: once it's, say, 1.16.0, and once it's in, I don't know, 1.16 — so things that are already published into the GCR repository, right, and things that are already available. That's why I was trying to look into what the images are that we build in the ci-cross-equivalent job; I think they changed the name of it, but there used to be a job that runs the cross build here.
F
The
overall
effort
is:
how
do
we
get
rid
of
flash
cluster
right
and
for
to
get
rid
of
slash
cluster?
We
need
something
equivalent
and
just
at
the
closest
something
equivalent
to
that.
The
existing
cube
up
script
is
cap,
G
right
and
then,
if
you
have
to
do
cap
G,
then
how
do
we
test
latest?
You
know
changes
and
KK
that
that's
the
problem
statement
how
we
ended
up
there.
F
Yeah
I'm
not
worried
about
that
right
now
and
I
just
want
a
general
solution
which
could
be,
which
may
not
even
use
Google
Cloud.
To
be
honest,
but
we'll
start
there,
because
that's
where
most
of
our
infrastructure
is
right
now.
So
if,
if
you
can
do
open
to
that's
fine,
we'll
start
there
and
then
we
will
see
how
we
can
extend
that
to
cause
images.
E
Just briefly, I did have some feedback from Azure: last week it came up in a discussion around machine pools. This is just more, like, FYI information about how they consume images for AKS and how they plan to continue to do so. They see pre-baked images as an optimization, so to speed things up, but they also do things like CVE fixes during boot; so rather than replace, they would do, like, an update during boot to get the latest bits, and then eventually publish a new image.