From YouTube: Kubernetes Kops Office Hours 20180831
A
Hello, everyone: this is kops office hours. Today is August 31st. We do have a couple of things on the agenda, which is moving around very rapidly; I'm just watching what's going on. If you do have things you would like to put on the agenda, do feel free to add them on there. Otherwise I suggest we just get started. First on the agenda is Joseph Stephens discussing disk management strategy.
B
So I've been dealing with some problems recently, trying to upgrade from m4 to r5 instances, and what's come up in that transition is the NVMe-type disks that the r5s are using. That's consistent across a number of the different new instance types, and it's resulted in a few problems.
B
The first big one was that partitioning of the root devices happens differently with NVMe-based devices. If I give it an EBS root volume, on m4 it just gets the entire device; on r5 it actually partitions an 8 gigabyte partition, mounts that, and excludes the rest, which I'm guessing you've run into?
A
Yes. Let me give you a bit of background on what's going on there. What happens is the AMI itself starts off as an 8 gig image, and this is general to how AMIs work. The AMI has a fixed size and just gets written into the partition table, and then it's the job of the image itself to expand to fill the space (or not, as you would like, but normally to fill the space).
A
The behavior is supposed to be that it detects the full disk size and expands into it. We run a script to do that; it's not one that I wrote, I think it's part of standard Debian, if you're using a particular image. My guess is it just hasn't been updated for NVMe: either it hasn't been updated upstream, or we haven't taken the update into that image.
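As a stopgap, the same growth the script should have done can be run by hand. This is a sketch, assuming a Debian-family image with cloud-utils installed, an ext4 root filesystem, and the NVMe device names shown (adjust for your instance):

```shell
# Grow partition 1 of the NVMe root device to fill the disk,
# then grow the filesystem into the enlarged partition.
sudo growpart /dev/nvme0n1 1
sudo resize2fs /dev/nvme0n1p1
# Verify the new size:
df -h /
```

The device and partition names here are illustrative; on m4-era instances the same root device appears as /dev/xvda instead.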
B
Right, sorry: expanding to fill the disk is the expected behavior. So I updated all the way to the most recent stretch image, and it's not solved there, but it did actually solve another problem that I'd been having, which gets into RAID. The particular use case I'm trying to solve for is: we're running Spark on Kubernetes, and we're running it in a very particular way.
B
We set up the RAID configuration at boot with the two local NVMe drives, which was actually fairly straightforward, and I think it would be a good thing to support as a native kops option. There's been interest expressed in quite a long ticket thread in ephemeral drive support, but also in supporting things like having a separate set of instances to run your etcd events cluster, running those on ephemeral disk.
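Pending native support, the boot-time RAID setup being described can be approximated with a kops hook, which injects a systemd unit that runs before kubelet. A sketch, assuming two local NVMe drives at the device paths shown (names are illustrative, not the proposed kops feature):

```yaml
# kops instance group fragment: assemble a RAID0 array from the two
# local NVMe drives before kubelet starts.
spec:
  hooks:
  - name: raid0-local-nvme.service
    before:
    - kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/usr/sbin/mdadm --create /dev/md0 --level=0 \
        --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
```

Mounting and filesystem creation would need additional steps in the same unit; this only shows the array assembly.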
A
Yes, right. I don't really know what to offer by way of advice or explanation; anyone else is welcome to chime in. I mean, we can certainly do better and start to support them. I think the list you've identified is the right list: we need to look at RAID, look at IOPS, not just space.
D
So if you have bursts of events, you can just ignore them, or retain them only for 5 seconds or something ridiculously low. And of course you can put the events etcd outside of the masters, for example. I'm not sure if you already do, but it could help at least with the load on the master nodes and this kind of stuff. So these are things you can do. Gotcha.
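For context, the events split mentioned here is wired up through a kube-apiserver flag that routes the events resource to a dedicated etcd cluster; kops uses this to separate its main and events clusters. A sketch (endpoints and ports are illustrative, though 4001/4002 match kops defaults of this era):

```shell
# Route Kubernetes events to a separate etcd, and expire them quickly.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:4001 \
  --etcd-servers-overrides=/events#https://127.0.0.1:4002 \
  --event-ttl=1h \
  ...
```

Pointing the events override at etcd members running on other instances (or ephemeral disk) is what moves that load off the masters.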
B
Yeah, we're not currently doing that. I had stumbled onto that in my research, and part of what I wanted to get today is what sort of roadmap kops might have around this, and whether I should be trying to help drive in that direction and volunteer my time to work on these different abstractions, or if I should just diverge and try to build things in a different way.
A
There are architectural limits, right. I don't know what the number is, but there is a number of pods you can schedule per minute, and each pod you schedule has an impact on, as you say, events and all these sorts of things, and you're likely hitting some of those more fundamental limits. I do like the idea of compacting more often. What happens with etcd is that it stores everything forever, and then Kubernetes...
A
Every five minutes, I think, Kubernetes sends a compact, which basically erases everything more than five minutes old, so you could also configure something to compact more often. I like the suggestion of not recording the events at all; that's quite good. It feels a little aggressive, but I think there's a SIG about big-data-type things, and it might be interesting to try to, you know, use fewer pods.
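The compaction cadence being described is controlled by a kube-apiserver flag; a sketch, assuming an apiserver version that supports it:

```shell
# kube-apiserver asks etcd to compact old revisions on this interval
# (the default is 5 minutes); shortening it trims history sooner.
kube-apiserver --etcd-compaction-interval=2m30s ...
```

Compacting more aggressively trades away watch history, so clients that fall behind will see more "required revision has been compacted" errors.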
A
If you can. The overhead of scheduling and all these things is quite high. I don't know if that's possible, but I would definitely appreciate the issues you're raising; it's great that you're pushing the limits and finding out where they are. Also, I think we have a bunch of local stores that we don't even use at all today on some instance types, which would be another issue. Yeah.
E
One experiment that you might be able to run: if you use kops to output Terraform code, you could then rewrite some of the volume mounts to leverage the instance storage SSDs instead. That's definitely an exercise for the reader, but it's an opportunity to get much higher IOPS for a given problem.
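The Terraform path looks roughly like this (cluster name and output directory are illustrative):

```shell
# Emit Terraform instead of applying directly, then hand-edit the
# launch configuration / volume definitions before applying.
kops update cluster mycluster.example.com --target=terraform --out=./out
cd ./out && terraform plan
```

Any hand edits live only in the generated files, which is exactly the drawback raised next: regenerating from kops overwrites them.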
B
Yeah, that's currently what I'm doing, and it is workable. The problem is I basically need to be able to stamp out this same infrastructure in an automatically manageable way, like a few hundred times, and at some point the manual tinkering just becomes unworkable. I was hoping there was a good way to build it into kops, like if we had a mapping of this.
A
I think that would be great. It's a little tricky specifically for etcd, because we have the ongoing transition to etcd-manager, which should enable better etcd management, in particular upgrades and in theory downgrades (but definitely upgrades is the primary use case). It would be great to get that in there and make sure we have it.
B
Is that all of your four topics there, or pretty much? So really the overarching thing is that I'm trying to figure out how to make my cluster very performant, because I'd like to scale rapidly to 2,500 nodes and several thousand pods all at once, and that's currently not viable.
B
We actually played with the garbage collection and scaled it from keeping a hundred terminated pods to the 12,500 default, and that brought down the main etcd, which wasn't great. So I'm trying to figure out where the limits are on this, and I don't see that well. The other one is that the API responsiveness of the cluster decreases pretty rapidly as it's hit with a lot of traffic.
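The knob in question maps to the kube-controller-manager's terminated-pod GC threshold, which kops exposes in the cluster spec; a sketch (the value is illustrative):

```yaml
# kops cluster spec fragment: cap how many terminated pods are kept
# around before the controller-manager garbage-collects them.
spec:
  kubeControllerManager:
    terminatedPodGCThreshold: 1000
```

A lower threshold keeps fewer dead pod objects in etcd, at the cost of losing their logs and status sooner.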
A
I just think that the sort of numbers you're talking about, like 2,500 nodes, is definitely pushing the boundaries, and there will definitely be other things you will hit at that sort of scale, or even half that scale. I don't know if you hang out in the SIG Scalability group and talk to them.
A
It would be worth talking to those people about some of the requirements that they've had, because I know that people have run big clusters, but I think they currently end up running with very large master nodes, for example (correspondingly large master nodes, I should say), just because otherwise your API responsiveness just isn't there. As I understand the design, most of the data will end up being cached in the API server.
A
SIG Scalability, yeah. I'm not sure how active it is; it used to be much more active, but certainly the people that know how to get to that sort of scale will be in that SIG, and they can point you to whatever they have. Cool, awesome. Do file issues, especially if you have things we can do better; that's wonderful. But yes, I would definitely be aware that you are at the forefront. Cool.
D
Maybe another couple of tips I can give, because now I remember when I actually had to do something similar: run away from etcd 2, because there was an old bug with BoltDB, which is the database behind the scenes, which would just, I think, start allocating a lot of memory, and it would go into a protected state where writes are not allowed anymore.
D
I mean, this happened a bunch of times to me, and it was a bug that was there in version 2 and early version 3, but it's been fixed for quite a while now. So if you get a fairly recent version 3, this is gone, and this was a huge problem, at least for me, when I was handling this kind of big cluster. I mean, we're talking about 30 or 40 thousand pods, not nodes, and the pods were, I don't know, 2 gig of RAM each, so quite big stuff.
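The "protected state" described here matches etcd v3's NOSPACE alarm, which trips when the BoltDB backend hits its quota and blocks further writes; a recovery sketch (the size and endpoints are illustrative):

```shell
# Raise the backend quota when starting etcd (8 GiB here), then
# inspect and clear any NOSPACE alarm so writes are allowed again.
etcd --quota-backend-bytes=8589934592 ...
ETCDCTL_API=3 etcdctl alarm list
ETCDCTL_API=3 etcdctl alarm disarm
```

Compacting and defragmenting first (etcdctl compact / etcdctl defrag) is what actually frees space; disarming the alarm without that just delays the next trip.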
D
The cluster alone would not survive; it was me behind the scenes trying to save etcd from dying, and that's why it was living, because it could not survive on its own. So that's one tip. The other one is: if you have limits on the API server, I mean memory or CPU limits on the API server.
A
That's actually a good point. I think we limit everything on the master to only use a core in total, so we might actually be throttling pretty badly there. I don't know; that's definitely something to check on. Another option would be to remove that limit.
A
For the other components, the API server, kube-controller-manager, and scheduler, they are run as static manifests in /etc/kubernetes/manifests. etcd itself gets managed by protokube, or in future by etcd-manager, which is basically trying to give us automated etcd recovery using EBS volumes, tracking where the EBS volumes go.
A
So that's why you see it sort of behaving like it's being reset: it's repairing itself. If you want to change that, we have to map it in through etcd-manager, but hopefully we are only setting requests and not limits anyway. In other words, everyone gets best-effort-type scheduling, at least I hope. Gotcha, cool.
D
Okay, so I'm not running etcd-manager yet, and I'm not sure if it'll do the trick, but there's a couple of clusters that I have with a pretty standard kops setup, and I don't have encryption enabled (encryption at rest for the volumes), and I would love to turn this on. But the documentation says you essentially need a new cluster, which I can do, but I'm wondering if it's actually possible to do without getting a new cluster.
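For reference, the setting in question lives per etcd member in the kops cluster spec, normally chosen at cluster-creation time; a sketch (the member and instance group names are illustrative):

```yaml
# kops cluster spec fragment: encrypted EBS volumes for etcd.
spec:
  etcdClusters:
  - name: main
    etcdMembers:
    - name: a
      instanceGroup: master-us-east-1a
      encryptedVolume: true
```

Flipping this on an existing cluster is what the docs warn against, because the existing unencrypted volumes are not converted in place.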
A
Yes. kops goes by the tags, so if you were to seamlessly swap in new volumes with the same tags that were encrypted behind the scenes, and you copied the files over or restored from a snapshot, I guess that would be okay, but it's pretty risky. That's where another thing on my list comes in: adding the functionality to etcd-manager to actually restore, since etcd-manager already includes a backup.
D
Alright, I can try it out; I just wanted to know if someone already did that. Because, for example, for backups, I mean it's not the perfect solution, but there is now this EBS volume lifecycle manager, I'm not sure if you've had a look. They do automatic snapshots for all EBS volumes, essentially every 12 hours or so, which is not great but good enough to restore in case something really blows up, and you don't need anything running; it's just state.
D
Okay, that works. Alright, then I'll go to the next one so that we save a little bit of time. Last time we discussed the roadmap for the next versions, 1.11, 1.12, like that, and we talked about add-ons, and I was trying to have a look at the open issues, and I'm a little bit confused about what the current status of add-ons is, because there is this channels tool, which is kind of hidden, and then there are the add-ons that go in the manifest that gets applied.
D
So I'm not sure if someone can summarize what the work to do would be, because currently I'm not using add-ons at all. I have my manifests that, let's say, complement the kops installation, and I just apply them with kubectl apply, with a tool that I have and stuff like that. But it would be great if I could just integrate this with the normal kops management.
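For background, the channels tool consumes an addon manifest of roughly this shape; a sketch (names and versions are illustrative):

```yaml
# kops "channels" addon manifest: each entry points at a versioned
# Kubernetes manifest that the tool applies and upgrades.
kind: Addons
metadata:
  name: example
spec:
  addons:
  - name: example-addon
    version: 0.1.0
    selector:
      k8s-addon: example-addon
    manifest: v0.1.0.yaml
```

The version and selector are what let channels decide whether an installed addon needs updating, which is the upgrade-control question discussed next.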
A
Yes, this is actually something I am looking at and working on, and Google right now is trying to figure out the strategy for add-ons sort of in general. I am sadly unable to share the findings right now; I don't think they're incredibly surprising to anyone, but I'm just waiting while working through the approvals on all of that.
A
We don't want updates forced onto everyone's clusters when we push a new version to the channel, because you also want some control over that. So today, pushing a new image into the channel doesn't automatically update everyone; you have to do a kops upgrade to pick that up. New clusters get it if you don't specify otherwise: it changes the defaults, but it doesn't change existing clusters.
D
Yeah, better, right, because if you have add-ons I assume there should also be some integration tests with the current version, so otherwise there might be some inconsistencies. The thing is, currently what I have works because I kind of know the stuff I'm installing, I mean the ecosystem there is around it, but it's really on me, knowing it. And I guess also for users of kops it would be cool to have that.
A
Yeah, I mean, it's certainly a big thing for kops; we feel the pain quite badly, because today we do have some add-ons that are managed, and they are baked in, like Calico or Weave and the networking and DNS components, baked into kops itself. So whenever we see a new update, we have to, in theory, release kops again and try to get it out to the user base.
A
And you know, we have a mechanism for forcing an update, but that's pretty aggressive, and we have a sort of softer mechanism to tell you that a new version is available. But it's a pretty convoluted way of doing a version bump on Calico; doing a whole new release of kops for that is slightly overkill.
A
Yeah, I'm hoping to share something soon, which I don't think will surprise anyone, so it shouldn't be too much longer. Anyone else is also free to work on this and propose things, and if anyone wants to reach me privately, I can share some more stuff in detail.
A
The same goes if anyone has any work in progress or is thinking about it, things like that. Elena, I think you're going to bump a couple of PRs, which is great. One of them I know about; oh yes, the other one too. Sorry, cool.
C
No, we just talked about this maybe a month or six weeks ago or something, so I don't know. I think the one I'm mostly interested in is the new cloud provider. I would love to be able to get that merged in so I can use newer features that I'm interested in. That's kind of where I'm at; I'm like two versions behind right now, and I'd like to be current in the future. Okay.
A
Cool, yes, and I will certainly dig into that and figure out what we need. As you say, communicate on that PR, and then mark any issues, and thank you for the continued work on it.
C
Actually, sorry, I didn't put it on the agenda, but I had a quick, sort of philosophical question about the load balancer that sits in front of the kube API server, just out of curiosity. I realize it's deeply embedded in the code; there's a package directory with a few different models, and it describes how different vendors implement the API server and the load balancer for the API server. I'm interested in adding some options for AWS, like this load...
C
...balancer comes configured without connection draining, for instance. It's easy enough to make an API call to this load balancer once it's created, but I would like everything that I care about in that load balancer to get configured automagically. So, I don't know.
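For the specific attribute mentioned, the manual after-the-fact version is a single AWS API call; a sketch (the load balancer name and timeout are illustrative):

```shell
# Enable connection draining on a classic ELB with a 300 s timeout.
aws elb modify-load-balancer-attributes \
  --load-balancer-name api-mycluster \
  --load-balancer-attributes '{"ConnectionDraining":{"Enabled":true,"Timeout":300}}'
```

The feature request being discussed is to have kops set this at creation time instead of requiring the extra call.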
A
A feature like connection draining, right, yeah; it's pretty generic. The idea is certainly generic, and it's okay to have things in the API which not all providers implement. We try not to use provider-specific words; we try to use generic language, so instead of calling it an ELB, we'll call it a load balancer. But connection draining is great, I think.
C
Yeah, I think if we just had that option and it defaulted to false. I don't know what the behavior is in AWS; I haven't felt out a vanilla load balancer in a while, but I think it doesn't come configured. As long as nothing surprising happens by default, and then those who care can turn it on, that's probably the way to go.
A
Makes sense. That would be a great feature to open as an issue, as a feature request, and if you want to work on it, that would be great as well. I'm happy to give you pointers if you want; there's a type, a struct, which is dedicated to the load balancer of the API.
A
That sounds wonderful, yeah, awesome. You do that, and then we'll collaborate on it, hammer out the design such as it is and make sure it's all great, and if anyone else has any thoughts, they can chime in about gotchas or something. But it sounds like you've got a win there.
A
...towards automatically choosing the correct image, and oh yeah, thank you for fixing the test; we'll make sure that gets LGTM'd. Actually, this is another topic: I think in general we should probably switch to a stretch image by default. Honestly, it's a little late for 1.11, because we're sort of behind already, so I don't know, 1.12 or 1.13 type timeframes. It's not a huge pain to continue building the Jessie image for a while, but eventually Debian is going to stop security updates for it.
A
I'm trying to recall; we are much closer to the official Debian AMI now, so I'm trying to remember exactly what the differences are, but I don't think the differences are large anymore. I think now it's mostly about pre-installation of software. The way kops works, the way nodeup works, is that if software isn't installed, it will go install it, like downloading it itself, and grabbing the correct version of Docker can actually be pretty tough.
A
We have one change coming, and I'm sure we'll have another one. I see the old giant issue has a checker, and I think we're now passing the first three, but they keep adding more tests for the latest one, so we're no longer passing it, and I don't know if we will ever catch up on that front. I feel like we're going to catch up on the Kubernetes front eventually, but we will never fully finish the rest, but okay.
A
As I understand it, there were more serious issues with Jessie where we wanted a newer kernel, and the newer kernels behaved better. I think there are some sysctls, some kernel flags, that we changed, but I think they're mostly convenience. I will double-check that; it's a good thing to check, along with the thing we talked about initially with the r5 instances: whether the official Debian stretch image does correctly resize that NVMe root volume.
A
Yeah, I think it'd be good to get onto the official Debian stretch image, or near it; that would be nice, but I am not entirely sure what our differences are right now. I don't know how people feel about moving to stretch at some future milestone, be it 1.12 or 1.13, or 1.11; stretch by default, I should say.
F
You have to forcibly switch over to a different Docker version, because 18.04 doesn't include the Docker version that we use right now. (Oh, good to know.) Yeah, so I actually opened a PR. If you go search for docker and whatever the Ubuntu codename is for 18.04, Bionic: basically, if you force that as the image and then you change the Docker version within your manifest to be whatever is listed in the PR, then you can test it.
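The override described lands in two places in the kops config; a sketch with the specifics left as placeholders, since the image name and Docker version come from the PR being discussed:

```yaml
# Instance group fragment: force the Ubuntu 18.04 (Bionic) image.
spec:
  image: <Bionic image from the PR>
---
# Cluster spec fragment: pin the Docker version listed in the PR.
spec:
  docker:
    version: <version from the PR>
```

Without the version pin, nodeup would try to install the Docker version kops expects, which 18.04's repositories don't carry.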
E
Now, we're in the process of changing up all of this stuff, because for various reasons we need to get north of Jessie, so I have no opposition. It just absolutely needs to be telegraphed, as in: in 1.11 stretch will be the default and Jessie will be optional, and in 1.12 it'll be deprecated entirely, or something shaped kind of like that. It needs to be telegraphed now. Okay.
A
So it's not a big deal to keep building them for a while, but certainly, yes, we should see what the support policy is on Jessie, and we should deprecate it, because people really should be using stretch eventually. But not an imminent deprecation. Certainly I think I would like to move to stretch as the default; I think that makes its own sense.
B
There is actually one thing: I have one problem solved by moving to stretch, which is that on at least one of the recent Jessie images, I was installing mdadm, and it has a setup hook where the default debconf setting makes it an interactive setup hook, and I couldn't figure out how to change that setting. It's a non-interactive setting on stretch.
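The usual way around interactive debconf hooks is to force the non-interactive frontend (preseeding answers where a package needs them); a sketch for the mdadm case:

```shell
# Install mdadm without debconf stopping for interactive questions;
# non-preseeded questions fall back to their defaults.
export DEBIAN_FRONTEND=noninteractive
apt-get install -y mdadm
```

This only helps where the package's defaults are acceptable, which matches the complaint here: the Jessie-era default was interactive regardless.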
A
There's another one on top, which is that some things will sometimes still ask you for a flag, and you can set up a response file, but yeah, that might be a good one. Yes, that's a good reason to run stretch rather than Jessie.
F
And then I brought up two other things real quick, two other PRs of mine. I've got three open PRs, but one of them is Bazel updates, which, Justin, you already said you were up for. I just keep having to rebase it every time we have changes; I just rebased both these PRs, actually. I don't know if there's any discussion on that, but we did want to get the Bazel updates in. There's a newer version than some of this stuff, but I didn't push that, because you actually update your Bazel locally.
F
So I thought this would get us up to date, and then we can redo that later, unless there's any comments on that. Okay, and then the second one is the machine type generator. We talked about this a bit ago; we had one issue with it. Basically, every time I spot-check a few of the machine types that we have hard-coded, a few are wrong here and there, and we constantly get PRs for this stuff, and they're not in a good order. So I made a generator a while ago.
A
I think it sounds great. I don't know if you've resolved the questionable licensing, or rather the lack of licensing, on that library you were using. (Yeah, I removed that.) That's good. I don't know if you're on the call, but I think Seth was also working on this with you, if you're interested in that. Yeah.