From YouTube: Kubernetes Kops Office Hours 20180720
A
Hello everyone, this is kops office hours; it is July 20th. This meeting is being recorded and we put it on the Internet, so be mindful of that. We do have a couple of items on the agenda; please do add more things if you would like to be sure to discuss them.
I think, as we just mentioned, probably top of everyone's mental agenda is the release of 1.10, which, I will say, the 1.10 beta is late, and I apologize, but it is imminent. I think we've burned down all the PRs that have gone in that were tagged with the 1.10 milestone. Some of them had to be punted to 1.11, but 1.11 should be a fairly fast follow-on. 1.10 has been our plan, and there are a couple of newer PRs that came in over the last 24 hours, so we're just triaging those and seeing which ones should go into 1.10.
A
So do keep an eye out for that. And if you have any PRs, bugs, or issues that you're aware of that someone should really take a look at before we do that, please do comment. Commenting on them is probably the easiest way; everyone has permission to comment, and then it will appear as recently updated, so that should be fairly... yeah. Hopefully that will work, I don't know. If anyone has any particular things, they can call attention to them immediately.
A
AWS gave us some credits, but we had a couple of clusters that seemed to have gone into some sort of retry loop and basically exhausted the credits, and so the backport is to basically slow that down to the point where it's an affordable rate of retries. We would obviously love to figure out the actual issue, and I did put out a list of IP addresses which were the top 10 offenders, as it were. They all seem to be fairly locked down.
A
So I figured that was safe, but I have not yet heard from anyone that may have that. If you have a cluster that has some nodes that are stuck in a loop retrying nodeup or protokube, please do contact me on Slack; I would love to figure out the underlying issue. We had one thing that I thought was it, but it turned out to be something different, around IPv6; it wasn't nodeup itself, it was just around firewalls.
A
But yes, so I will do a backport of that, and it'll include some other fixes as well. And I think, probably in 1.11 or 1.12, there's a long-term goal to get this into a CNCF account, and so we will try to make it pull from either a different S3 bucket, or a vanity domain which we can re-point at an S3 bucket, or a GCS bucket, or, you know, a different back-end, so that we're not tied to anything.
A
And of course, if you are worried about it being on my account, there is ample support for overriding it and pointing at your own mirror. So, but yes, hopefully, I expect once we get the retry rate down a little bit, I will go back to AWS, cap in hand as it were, and ask them for a couple more credits.
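For reference, recent kops versions expose this mirror override through the cluster spec's assets section; a minimal sketch, with placeholder registry and mirror names, and field availability may vary by kops version:

```yaml
# Sketch: point kops at your own mirror instead of the default buckets.
# Registry and URL below are placeholders; you are responsible for
# populating the mirror with the matching artifacts.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  assets:
    # container images are pulled from here instead of the default registries
    containerRegistry: registry.example.com/kops-mirror
    # nodeup, protokube, etc. are downloaded from here instead of the default bucket
    fileRepository: https://mirror.example.com/kops/
```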
B
I think technically we failed. We still fail e2e.

A
Oh really? Okay.

B
A lot of people are asking; I bet a few people even on this call are on it. Yeah.
A
Maybe, you know, we can confirm. So there is one test, I think, that would be a good thing to do. There is one test that fails, and it isn't a real failure. It is a failure because of the newer... well, I think there's a newer kernel in Stretch than there is even in our... I think Stretch is on 4.9, and we have a 4.4 kernel in the Jessie image we serve.
A
You know, we have a newer kernel in the Stretch image, and the 4.9 kernel fails a networking e2e test because it exposes network statistics in a different format. So as far as I know, it's not a real failure; it is just a kernel procfs change. What we could do is simply disable that one test. We do already exclude a couple of tests, because kops does not install the dashboard by default.
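For context, conformance jobs usually exclude tests like these with a ginkgo skip regex passed through the test harness; a hypothetical sketch, where the regex is purely illustrative and not the job's actual configuration:

```shell
# Hypothetical: exclude the procfs-format-sensitive networking test and the
# dashboard tests (which kops does not install by default) via a skip regex.
kubetest --test \
  --test_args="--ginkgo.skip=network\sstatistics|Dashboard"
```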
C
Hey, is there any way that we can open PRs against those kope.io images, or is that all...?
A
They are, so this is topical actually. They are maintained in a repo called github.com/kubernetes/kube-deploy, and there's a directory in there called imagebuilder, which actually is a snapshot of the Debian tool that builds images, whose name is something I forget... debootstrap, or is it... it's a wrapper around debootstrap. No wait, it's a wrapper around the tool which wraps debootstrap, and it is called... wait, it's the same tool that is used to build the official images; bootstrap-vz is the name of the tool. So we use the tool that's used to build the official Debian images, and we have our own tool that essentially automates running bootstrap-vz. It's run in CI, so it's sort of a more, I don't want to say more correct, it's a more repeatable process. Anyway, so if you want to add a...
A
So in the kube-deploy project, the imagebuilder directory is now the only thing left in there, and we're talking about moving imagebuilder to the top level of that repository, and/or renaming kube-deploy to something more discoverable, and/or changing it so that it lives under sig-cluster-lifecycle or another SIG. This repo is sort of hidden; it predates the more rational repo organizational structure, and kube-deploy, I think, is mostly dissolved now. So, but yes, the images are there.
A
It is definitely intended to be repeatable, and you should be able to build it on your own machine, and you should feel free to do so, right, if you have any doubts. And the kernel that we build, well, if you're using the Stretch kernel there is a stock kernel, but our kernel is in kopeio/kubernetes-kernel, I believe. Yep, kopeio/kubernetes-kernel. And again, you should definitely feel free to do that, but I mean, the hope is that, well...
A
The hope is that we won't have to build our own kernel anymore with Stretch, and there's also the potential to look at using Amazon Linux, which now uses systemd, which was the blocker before. So another option is to switch to Amazon Linux or Debian Stretch. Yet a third option: there's again talk of building an official AMI, making this not a kops thing, and so we can take imagebuilder, or figure out what it is, and, I don't know...
A
...change. And so, but yes, it would be nice to effectively say, you know, this other project, whatever it is, is doing a great job, so we're going to, like, outsource our... yeah, exactly. And we're going to stop building it after a year, and you should start thinking about it. But we're not yet saying that. So... oh, alright.
A
Yeah, I actually found it. So the 1.10 and 1.11 images are missing, but there's an open PR, number 699. So we should probably get... actually, now that we're not the only people in there, we can also add our OWNERS into that repo and we can start approving our own PRs, as it were.

C
Cool, I'll take a look, thank you.

A
Okay, so I think... have we talked enough about the status, or does anyone want to talk more?
A
Everyone's been sending in wonderful PRs and doing wonderful PR reviews, and that's great and super helpful. And just triaging: if there are any issues that need to be looked at, well, obviously helping with issues, but, you know, bumping them, even as an end user that doesn't necessarily have permissions, that would be wonderful, just so we can sort of make sure we don't miss anything important, like the package issue that I think you raised. There is... I see.
F
Personal request: this has been sitting in a PR, kind of in a retest loop, for a few weeks now. I'm not sure, you know, what can be done, and I don't know what the cycle is; it's all kind of opaque to me. But there's at least one real-life person interested in getting this merged in, and I'm not sure what the process is for that. So if I can help, I'm happy to, but I just kind of wanted to bring that up. Yeah.
A
Let me have another look at it. I think my concern with it is just the sheer weight of the code, and that it is potentially a large maintenance burden, and I want to understand whether there's anything we can do to make it more maintainable in future PRs. Sure, I think that is the concern. That's why it keeps getting pushed to the bottom of my to-do list, which essentially means I never get to it, which I apologize for.
A
But that's why, as it were. So I don't think there's anything necessarily technically wrong with it; it's just the sheer weight, and the burden versus the value, right. It is valuable to understand that you are a real user and want to use it, and I think the question is how many other people out there are there.
B
I was gonna say, I was just poking through it, and I noticed that it's at least behind a feature flag, so it does seem low-risk in that regard. But, I mean, it's a lot of code, even when I go to take a look at it. So yeah, I think, though, if we can lower the risk the minimum amount, that would be awesome.
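As background, kops gates experimental code paths behind the KOPS_FEATURE_FLAGS environment variable; a sketch, where the flag name is illustrative rather than confirmed for this particular PR:

```shell
# Sketch: enable an experimental kops feature for one shell session.
# "+Spotinst" is an illustrative flag name, not necessarily this PR's flag.
export KOPS_FEATURE_FLAGS="+Spotinst"
# Subsequent kops invocations in this shell would then see the flag, e.g.:
# kops update cluster --name example.k8s.local --yes
```

Because the flag is off by default, clusters that never set it keep the existing behavior, which is why a feature-flagged change is comparatively low-risk to merge.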
F
Well, at least share that feedback with our good friends, just to see if we can, you know, come to some sort of... broker some kind of deal. I guess the follow-up question I have is really based on: I feel like I'm just using their fork of kubernetes and kops, well, of kops really, and I think I'm kind of pinned to a version that doesn't have all of the necessary fixes that have, you know, come over time.
F
If it makes sense for this: so we go through a few GitHub issues about this cgroup issue specifically, and essentially, I feel as though they're asking for a fix like, hey, specify in your kops cluster spec file, you know, these places for cgroups. And it's just like, okay, that seems... you know, other people on the internet say that's a fix.
F
I'd say that seems like a hack, and this is something that feels like it should get fixed in kops, but far be it from me to know all of the magic that happens underneath the covers. I just wanted to know your thoughts on whether or not this is an actual issue that should be fixed, or something that individual folks should be specifying in all of their cluster spec files. So that's the question I had: what's the plan here?
A
Absolutely right. I just put a comment on it myself to bump it to the top of the list. I would say we should fix this in 1.11. I think what happened here was that, early on, we had to use different cgroups to make it work, and we're talking about, like, I think before even Ubuntu 14.04. No wait, maybe it was 14.04, but anyway, it was...
A
It was a while ago, and I think we've stuck with those, but it does spew a few warnings, which seem benign. What we should do is try to fix those warnings and see what's going on upstream, and I presume that we can. Yeah, we should definitely fix this. As far as I knew, it doesn't cause any impact, but right, we should; it is annoying, and having the kubelet logs spewing is not good, full stop. So yeah, thank you for drawing attention to it, and it'll be good to fix.
A
So in terms of timing: I think we can probably change the defaults, so it's safe to. What we'll do is change it so that the defaults, in fact those cgroups, are set automatically if you are running kubernetes greater than or equal to 1.11, which is sort of how we introduce new features. So if you're running an existing cluster, it won't change, but you can also get it... I want to validate that.
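The workaround circulating in those issues amounts to pinning the kubelet cgroup paths in the cluster spec; a sketch, assuming the commonly suggested systemd paths, which may not match every distro or init setup:

```yaml
# Sketch of the cluster-spec workaround for the kubelet cgroup warnings.
# The paths are the commonly suggested values; adjust for your setup.
spec:
  kubelet:
    cgroupRoot: "/"
    kubeletCgroups: "/systemd/system.slice"
    runtimeCgroups: "/systemd/system.slice"
  masterKubelet:
    kubeletCgroups: "/systemd/system.slice"
    runtimeCgroups: "/systemd/system.slice"
```

Making these the defaults for kubernetes >= 1.11, as described above, would mean new clusters get them automatically while existing clusters keep their current behavior.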
A
You know, I was actually thinking a bit more about the spot instances thing, and one of the interesting things could be that the machine controller might not be that far away for nodes. And I presume, or my question is: is the value of spot instances greater for the nodes than it is for the masters? I mean, do you run your masters on spot instances? It's still running spot instances, right, with automatic bidding for spot?
F
Yes. And so I'd say yes for the nodes, definitely, just for our particular use case. We want to make sure that we're using larger instance types, and those are getting a little more costly in AWS. So our preference, if we had to pick which one came first: nodes. But definitely, you know, the entire cluster we would like to run on spot instances.

A
Okay.
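For anyone following along: stock kops can already bid for spot capacity per instance group via maxPrice; a sketch, with placeholder names, sizes, and price:

```yaml
# Sketch: a node instance group bidding for spot capacity in stock kops.
# Name, machineType, sizes, and price are placeholders.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  role: Node
  machineType: m4.xlarge
  maxPrice: "0.10"   # USD/hour bid; instances are reclaimed if the spot price exceeds it
  minSize: 3
  maxSize: 10
```

What a fork or controller adds on top of this is typically automated bid management and fallback to on-demand, which plain maxPrice does not do.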
Alright, next on the agenda, which I think we actually covered already: where are the images, there was a PR, is there an official place or account or address? Yeah, you're good, okay. Yes, the long-term goal, the goal that has been on the agenda for a long time, has been to get this into a CNCF account so that it's not linked to any individual, and the challenge to date has primarily been around figuring out...
A
I think we have cut a 1.10 release branch, but we haven't... I mean, any PRs landing on master, I think, will still go into 1.10; we'll fast-forward merge them all across once we cut the beta. That will no longer be the case, and particularly once we cut the 1.11 alpha, that will no longer be the case; there we will do individual cherry-picks.
A
Actually,
there
are
a
couple
of
PRS
that
have
a
label
like
cherry-pick
candidate,
and
those
are
thinking
about
those
are
the
ones.
I
think
should
go
into
the
one
nine
branch
which
is
otherwise
you
know
effectively
closed
right.
Certainly,
there
would
certainly
no
automatic
PRS.
Certainly
peers
will
not
automatically
go
into
the
one
line-
branch
we're
not
going
to
the
one,
my
branch
by
default
any
more
and
we
have
to
manually
curate
any
prz
wants
a
back
porch.
So
for
now
anything
in
master
will
go
into
110
once
we
do
111
alpha.
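The manual curation described here is plain git cherry-picking; a self-contained sketch against a throwaway repo, with placeholder branch, file, and commit names:

```shell
# Self-contained demo of the release-branch cherry-pick flow on a
# throwaway repo (branch and file names are placeholders).
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email kops@example.com
git config user.name "kops demo"
echo base > fix.txt
git add fix.txt
git commit -qm "base"
git branch release-1.10            # release branch forks from here
echo fixed > fix.txt
git add fix.txt
git commit -qm "fix merged to master"
sha=$(git rev-parse HEAD)          # the commit we want to backport
git checkout -q release-1.10
git cherry-pick "$sha"             # manually curate it onto the release branch
cat fix.txt                        # the release branch now contains the fix
```

In the real repo, the cherry-pick would then go up as a PR against the release branch rather than being pushed directly.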
A
I think we have a new approver; I'm just seeing whether it actually is going through. Mike Splain has been doing a wonderful job of reviewing PRs, and we proposed him this morning as an approver, which means he's able to actually approve PRs and not just review them. And it looks like, yes, Rodrigo added a second approval, so congratulations, and thank you for the work, Mike.

B
Thank you.
A
And also, if anyone else wants to get involved in PR review or approving: what I always say is you can always comment on the PRs and sort of act like a reviewer, even if your LGTM doesn't technically mean anything. So please feel free to do that, and I think you guys have been noticing some people doing that. So thank you to the people stepping up and helping with the review; it is very helpful, and thank you for contributing.