From YouTube: Kubernetes kops office hours 20190201
A: There's some challenge around YouTube authentication, which I'm working through with some help, but hopefully this one will actually get uploaded. The AWS ones are being put on the internet; for this particular one I don't actually know. I think some of them are being uploaded, but they aren't necessarily being put into the channel. Anyway, eventually this will make its way onto the internet, and you will be seen there, so be mindful of that.
A: Well, the first two of us are not physically present. So, just a public service announcement: as I'm sure people are aware, AWS testing is... oops, my machine is being very slow; I don't know if I froze. ("We can see it." "I can see it.") Okay, all right, I will just keep talking; yell at me if I am talking over someone or something like that, feel free. So: AWS testing is currently off, not working right now.
A: There's some confusion about paying the bill, I think, is the best way to put it; that will get resolved fairly soon, I imagine. And so the e2e tests are currently non-blocking in kops. We are doing our best. We have unit tests, but we don't have integration tests other than the e2e on AWS, and bringing up a second platform would probably be a fair amount of work. Although we do have some tests on GCE... anyway, I can look at that, but hopefully the AWS tests come back.
A: The sure way to make the AWS tests come back online is to invest a lot of time in the second platform; the minute it is ready, the AWS tests will be back online. But until that happens we are merging pretty cautiously. So OpenStack PRs, for example, which are great to see: we feel like we can merge those because we wouldn't get a lot of signal on them from the AWS tests anyway, so we're relying on non-integration tests, or non-cloud tests.
A: So there's hopefully no short-term problem in terms of figuring that out; I think it was like 9 million dollars, if I remember, so that should cover a couple of years of bills, hopefully. But yeah. So, if we have multiple locations, then we do this. I did debate... actually, I tried a bunch of things. I did debate putting those artifacts...
A: When you create a cluster, one thing we could do is copy the artifacts into your bucket, so that essentially, once your cluster is created, you're never relying on anything outside the cluster. The challenge is that at least nodeup, the first component, would have to be world-readable.
A: It would probably have to be world-readable, which means that your bucket then has a mix of ACLs in it, which is a little scary. And then the other problem is sort of a user-experience thing: uploading a big binary will make cluster creation much slower. So I'm starting with a simple mirror, and if anyone has any clever ideas about how to do that, whether we should have some sort of cache that we auto-populate, or do it on the server...
A: ...somehow, I'm all ears. But the first step is likely going to be setting up a list of URLs that we can pull from, and we will always verify by hash. So it doesn't particularly matter where you do or don't get it from, or if one of the mirrors is offline, or if one of them points to some random person's website; it doesn't matter, because the hash is what enforces things. Cool.
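(For the curious, here is a minimal Go sketch of the mirror-list idea described above: try each URL in turn and trust only the SHA-256 digest. The mirror URLs and digest are hypothetical placeholders; this is an illustration, not kops's actual implementation.)

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
)

// fetchVerified tries each mirror URL in order and returns the first
// payload whose SHA-256 digest matches wantSHA256. Which mirror served
// the bytes does not matter: the hash is what enforces integrity.
func fetchVerified(urls []string, wantSHA256 string) ([]byte, error) {
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			continue // mirror unreachable; try the next one
		}
		data, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil || resp.StatusCode != http.StatusOK {
			continue
		}
		sum := sha256.Sum256(data)
		if hex.EncodeToString(sum[:]) == wantSHA256 {
			return data, nil
		}
		// Digest mismatch: stale or tampered mirror; keep trying.
	}
	return nil, fmt.Errorf("no mirror served an artifact with digest %s", wantSHA256)
}

func main() {
	// Hypothetical mirror list and digest, for illustration only.
	mirrors := []string{
		"https://artifacts.example.com/nodeup",
		"https://mirror.example.org/nodeup",
	}
	data, err := fetchVerified(mirrors, "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	fmt.Printf("downloaded %d verified bytes\n", len(data))
}
```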
C: Yeah, I just want to make sure that our tests being broken weren't the only issue.

A: Yeah; the tests run on their own account. Yes, that's a deliberate thing. That account runs code that is less vetted, since PRs are generally less vetted, and so it is its own account, and I think it will always be a separate account from any serving infrastructure, of course.
A: Okay, if there's nothing else, I will move on. I put two other things in the agenda; I'll just talk about them quickly and then let people answer the things that other people wrote, since I don't want to dominate the conversation. It's great to see OpenStack in progress; if people want to talk about that, that's great. And then the other thing is that I put up the PR for the etcd3 upgrade docs, describing the exact path and sequence.
A: It's basically what we talked about last time, but there are written-down words now. Basically, because the move from etcd2 to etcd3 is disruptive for etcd clusters, we sort of do TLS and the Calico-to-CRD migration at the same time. But there are steps in there, which I have tested, on how to break out those steps so you can run them independently if you want to. I wouldn't necessarily recommend it, because it doesn't necessarily buy you a lot in terms of, like...
E: Well, I was trying to figure out... I work for Tigera, and we do Calico, so my curiosity was: why do you have to go to CRDs, or what's the motivation, I guess, for doing that?
E: Yeah, running on CRDs instead of being etcd-backed; I mean, that is actually coming soon, there's work being done on that. But no, I was just... somebody brought up that they'd be disrupted going from two to three, and, well, we added the non-destructive option. Sorry, for etcd, not for going from two to three with the CRDs, yeah.
A: Ryan and I tested some of this, and I think we found some challenges, like when we go from etcd2 to etcd3. We do migrate, but one of the things is that the way we do the rolling update is basically to treat each master as a unit; we don't roll out etcd, then the API server, or have the ability to do multiple steps of that nature. It's sort of externally driven, which made it challenging. The other...
A: Are you talking about a better way to get from where we are, which is, I think, users running Calico with etcd direct? At some stage, probably around the time of 1.13, we have to be running etcd3, and at some stage this year we would like to be running etcd with TLS, and we want to be running Calico 3, as Calico 2 is older, right? Yeah, yeah.
E: Yeah, okay, all right. I just saw this this morning, so I haven't thought a lot about it or talked to any colleagues about it much; I just wanted to get a little input. Yeah.
E: Quick question: the recording from last week, has it made it anywhere? Or is that still in the same boat as what you mentioned earlier?
A: I will double-check, and I will try to put links in the agenda notes. They will also be in the kops channel if I now have permissions, or they will not be if I do not yet have permissions, but at least the links will be in the notes, and you'll be watching my thrilling YouTube channel, which, it looks like, has my liked KubeCon talks from last December.
A: The challenge is: if we go straight to Calico v3 with CRDs, it's effectively a network partition. So the strategy I'm recommending is basically to go as fast as you can, not validate, and cross your fingers that it comes back happy, which is, admittedly, a little scary, and I very much welcome input there. But yes, it should work, and once we're able to get these merged, I will look at your issue.
A: For example, the path I definitely tested was: kops 1.11 (Kubernetes 1.11 is etcd2), then you go to etcd-manager with etcd2, then you go to etcd3 with etcd-manager, and then you go to 1.12. This is with Calico, and that last step you do as fast as you can, because that's the one that introduces the CRDs with this PR. So yes, it should work.
A: But yes, I will double-check your scenario as well, and if you're able to double-check it, that would be great. Yeah, it's a little awkward that we can't merge things right now. ("We don't need etcd-manager to go from Calico v2 to v3, right, if we're already running etcd3?") Correct. ("Okay, because we have TLS on, so we can't use etcd-manager at the moment. Though I see that we do add TLS support now.") Yeah, that's nice; once that's in, I should test that.
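(Since TLS-enabled etcd3 comes up here, the following is a minimal sketch, using etcd's official Go client, of what connecting to an etcd3 cluster over TLS looks like. The endpoint and certificate paths are placeholders; this illustrates the client side only, not what kops or etcd-manager actually does internally.)

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Placeholder paths for the cluster CA and a client keypair.
	caPEM, err := os.ReadFile("/etc/etcd/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cert, err := tls.LoadX509KeyPair("/etc/etcd/client.crt", "/etc/etcd/client.key")
	if err != nil {
		log.Fatal(err)
	}

	// With TLS configured, the endpoints must be https; plain-text
	// clients can no longer talk to the cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:4001"},
		DialTimeout: 5 * time.Second,
		TLS: &tls.Config{
			RootCAs:      pool,
			Certificates: []tls.Certificate{cert},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	log.Println("connected to etcd3 over TLS")
}
```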
A: I think the primary failure is just the deployment manifest that has been used with kops, because I think it was originally written to have a certain configuration, and the newer version of etcd3 that it's trying to use isn't compatible with some of the config that's being passed in. I don't think it was purely an issue with Calico itself. The PR, or rather the issue number, is 6261; that's the one I opened in kops.
G: Somebody put up an issue in November talking about moving to launch templates, as that offers, of course, new features that launch configurations don't cover, which makes sense. This is Sarah. I saw there was a PR created to actually get some of that in, but I'm not sure exactly whether it is actually everything or just part of it, so I'm just wondering what stage that is at, if anyone is aware.
C: I haven't dug too deep on this one yet; there are a bunch of changes. He pushed a lot of changes since I last looked at it, and I know he needs to rebase based on some other changes we've made. I mean, it's a big PR, right? Yeah. So I don't know if we can guarantee this for 1.12, but I imagine at least soon after. I mean, I want it, and I think a lot of people on this call probably want this. So, right.
C: I think he wants the review, then one final rebase, and then we'll move on. But I think the one thing we can do is make sure that things like Alex's recommendation of adding more configurations aren't added on within this PR; we can just add another PR. People should add subsequent PRs off of this one to keep adding more work; we don't need to bake it all into this PR so that it keeps growing.
C: So, as long as we can encourage people to do that: if you find any holes, I'd encourage you to branch off of this one and start working on it, and then once we get this one merged we can add more features. But yeah, I'd love to see this get merged, and I'll definitely take a look at it today as well. Yeah.
A: Yeah, I wanted to dig into what people are after. Is it the mixture of spot and on-demand, or is it the mixture of instance types, or what are the features that we are looking for here? Because I think we should get this in, but I think there's also relevance to the work in cluster API, and I think one of the things we can do is, where we have this...
C: We do something similar: for every instance type we use, we have it in at least three regions, which means three scaling groups, you know, launch configs, and then we double it so that we have spot and non-spot instances and stuff like that. So we have a lot, and we have it all automated, so it's fine by us, but I think this would clearly simplify it a little bit. Either way, I'm happy for it to be in cluster API too.
A: Short term at least, I don't know. Yeah, I think we should get this in; I'm not saying we shouldn't. I'm just saying I think we should definitely communicate the requirement across, too. Totally. What about... are people excited about mixing spot and non-spot, or is that less exciting?
G: Definitely interesting for us, especially because we have a pretty small cluster, so we're actually running everything on spot, including our masters, which is, I know, a bit tricky, but we have stable spots, so it's fine for us. But we would still like a backup that goes to on-demand, so if that could be automated, yes, that would be perfect, of course.
C: That's also my understanding.

A: Okay, all right. Well then, my bad. My guess is that this is the next generation of fleet. Yeah, I don't know, then, whether there's a reason to use fleet if you have this thing that is actually called a mixed instances policy.
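(A sketch of what a mixed instances policy looks like through the AWS Go SDK: one auto-scaling group spanning several instance types and blending spot with on-demand, instead of one group per type doubled for spot. The group name, subnets, launch template, instance types, and capacity numbers are all hypothetical, and this is not the kops implementation under discussion.)

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))

	// One ASG covering several instance types, mixing spot and on-demand.
	_, err := svc.CreateAutoScalingGroup(&autoscaling.CreateAutoScalingGroupInput{
		AutoScalingGroupName: aws.String("nodes.example.com"),
		MinSize:              aws.Int64(3),
		MaxSize:              aws.Int64(10),
		VPCZoneIdentifier:    aws.String("subnet-aaaa,subnet-bbbb,subnet-cccc"),
		MixedInstancesPolicy: &autoscaling.MixedInstancesPolicy{
			LaunchTemplate: &autoscaling.LaunchTemplate{
				LaunchTemplateSpecification: &autoscaling.LaunchTemplateSpecification{
					LaunchTemplateName: aws.String("nodes"),
					Version:            aws.String("$Latest"),
				},
				// Several instance types in one group, rather than a
				// separate launch config and group per type.
				Overrides: []*autoscaling.LaunchTemplateOverrides{
					{InstanceType: aws.String("m5.large")},
					{InstanceType: aws.String("m4.large")},
					{InstanceType: aws.String("t3.large")},
				},
			},
			InstancesDistribution: &autoscaling.InstancesDistribution{
				// Keep one on-demand instance as the backup, and run
				// everything above that base on spot.
				OnDemandBaseCapacity:                aws.Int64(1),
				OnDemandPercentageAboveBaseCapacity: aws.Int64(0),
				SpotAllocationStrategy:              aws.String("lowest-price"),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("mixed-instances ASG created")
}
```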
A: Okay. The other thing that we sort of skipped over, which I don't know if you want to talk about: we have a lot of PRs going in for OpenStack, which is wonderful. I don't know if anyone is here that is working on OpenStack and wants to talk about it or ask any questions; otherwise, we're going to keep reviewing PRs, I guess, yeah.
H: My name is Derek Lemon; I am kind of new. I did the initial PR for OpenStack; I work for Cisco. We were using Kubespray before, and we had some issues with people deploying more clusters than maybe they need, so we're trying to get into more of an instance-group-supported model. Me and Jesse are testing it largely; we're really beating the crap out of it: destroying, scaling up, deleting things, updating. So it's still pretty new, and we'll try to get issues fixed as they come. Right now we have just mine and Jesse's environments, and I don't know how different they are from others. What we could really use is an environment with Designate, because neither me nor Jesse has it. I know we got some requests in with OpenLab; hopefully we can get some end-to-end tests written for OpenStack.
A: Wonderful; we really appreciate it, and if there's anything we can do, other than reviewing your PRs in a more timely manner, let us know. I think one of the big blockers has always been figuring out the right way to test for people that don't have a private OpenStack. I've been told Alibaba Cloud is a good one, and I think OVH is also OpenStack-based, so maybe we can also try some of those. But yeah, thanks for all the work.
H: Yeah, I'm also very interested in the cluster API stuff moving somewhere, because I think that might be consumable for any kind of auto-scaling capability that we want to introduce into OpenStack as well, since we have nothing there. In regards to Heat: we're not using Heat, so...
A: Yes, absolutely. There's a cluster API provider for AWS, there's a cluster API provider for GCP, I think, and then there is a cluster API provider for OpenStack. I don't know who's working on it; I'm just pulling it up, but some of this looks pretty active: there's a PR from two days ago. But yes, that will enable exactly that; the cluster autoscaler is going to integrate with cluster API, so in theory it will be, I want to say, better...
A: You mean, on AWS, would we use this instead of auto-scaling groups? Very possibly. Auto-scaling groups do have two nice things: they do the auto-scaling, and they are also self-rebooting, as it were, self-repairing. So our masters still run in auto-scaling groups with a min and a max of one. You can turn off all your nodes, or hit a power-failure scenario, and they all come back. Fingers crossed, touch wood, but yes, they'll come back.
A: But yes, I would certainly encourage you to look at the cluster API provider for OpenStack. I think it'll be great to see whether we could integrate; we haven't yet done an integration, but I'm hoping we can get to the point where we do have one with kops, at least for the nodes. The masters are a trickier situation, but we can at least get the nodes working in this way, and then auto-scaling will work.
A: That is, getting your nodes to work once the cluster autoscaler goes through the cluster API. If you are interested in this, the cluster API is being done as part of SIG Cluster Lifecycle; it's a subproject of SIG Cluster Lifecycle. SIG Cluster Lifecycle meets every other Tuesday, I think, and cluster API has a breakout meeting, which is quite good, every other Wednesday, and I would encourage you to attend that. It's a good...
A: It's like a fun new project that I think is moving quite quickly, and it feels like it's the right place to do that. Yeah: kops doesn't run afterwards; kops only runs when you create or update a cluster, whereas the autoscaler is a controller, or rather the cluster API has a controller, and so it runs all the time. Yeah, it sounds good.
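(To illustrate the distinction just drawn: a minimal, hypothetical Go sketch contrasting a one-shot reconcile, as kops does at create/update time, with a controller loop that reconciles continuously. reconcile here is a stub, not a real API.)

```go
package main

import (
	"fmt"
	"time"
)

// reconcile is a hypothetical stand-in: compare desired state with
// actual state and fix any drift. A real controller would talk to the
// Kubernetes API server and the cloud provider here.
func reconcile() error {
	fmt.Println("reconciling: comparing desired vs. actual state")
	return nil
}

// runOnce is the kops-style model: reconcile once, when the operator
// runs a create/update command, then exit.
func runOnce() {
	if err := reconcile(); err != nil {
		fmt.Println("reconcile failed:", err)
	}
}

// runForever is the controller-style model: reconcile continuously, so
// drift (a lost node, a scale-up decision) is repaired even when no
// human is running a command.
func runForever() {
	for range time.Tick(30 * time.Second) {
		if err := reconcile(); err != nil {
			fmt.Println("reconcile failed:", err)
		}
	}
}

func main() {
	runOnce()    // one-shot model
	runForever() // controller model
}
```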