From YouTube: 20180509 kubeadm office hours
Description
No description was provided for this meeting.
B: The CI cross job was meant to build all artifacts — even though they're untested, it was supposed to build them all — and that was what was used in the upgrade test. The regular tests all work because they're using the CI URL, but the CI cross URL stopped working a long time ago, because somebody in test-infra stopped publishing the artifacts. So when they created the upgrade job, which depended on the cross artifacts — which, again, are untested but get pushed into that location — those artifacts were out of date and were never updated.
B: So, as a result, all of the upgrade tests have been failing in 1.10 and later — and actually they were failing well before then, because I think the last update to the CI cross artifacts, last time I looked, was from some 1.9-ish build, and someone stopped doing it then.
A: Yeah, so ci-cross is running at head, and it's only ever running at head, not for the release branches. Hence we have the CI cross images always at head, running on roughly an hourly interval. But when we cut a new release branch and go to 1.12, no more 1.11 image builds will be uploaded. And since the regular CI build has to be very fast — it's used for merge builds — no images are published there at all, not even for our amd64 builds.
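(For reference, a minimal sketch of the two publish locations being contrasted here; the bucket names and paths are as I recall them for this era and may have changed, so treat them as assumptions.)

    # hourly cross builds at head: all architectures plus image tarballs, untested
    gsutil ls gs://kubernetes-release-dev/ci-cross/
    gsutil cat gs://kubernetes-release-dev/ci-cross/latest.txt
    # per-merge CI builds: fast, amd64 binaries only, no images published
    gsutil ls gs://kubernetes-release-dev/ci/
    gsutil cat gs://kubernetes-release-dev/ci/latest.txt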
A: The other thing we could technically try to ask for is to make the normal CI job — the one used for the merge builds and so on — start pushing images. But that would delay the normal procedure and make it take maybe five or ten minutes longer, which is why it doesn't do that at the moment.
A: We're using CI cross because it runs periodically, every hour or so, so it's always close to head — it could be one or two commits behind if you recently pushed — but that's better from the build-infrastructure perspective, since we don't push for every commit. Instead we push every few commits or so, and we can still use them.
B: There should only be one. The fact that we have two separate CI locations is bananas — no one understands it. It doesn't make any sense at all. And the idea that we aren't publishing all the artifacts we need because there's some kind of overhead in garbage collection also seems fundamentally broken to me, because there's no reason we can't garbage collect these things after the fact. It seems like a broken piece of the infrastructure.
B: I do think the artifacts should be published until the PR is merged, and once the PR is merged they can be garbage collected, right? That way you still have the artifacts, so anyone can reproduce the issues they're seeing in CI at any point in time. Because the problem we have here is a weird disconnect: a person has an issue on kubeadm, whatever it may be, and in order to actually reproduce it they need to rebuild everything in their local environment.
B: They push to their own local registry and try to pull down artifacts that are basically part of their PR. This seems like a broken piece of the testing setup, and every time we're trying to hack around it — but you know, we want to pick our battles. So one thing: I think we should depend on only one location and just make that the canonical location. Can we agree or disagree on that bit?
A: There's a lot of churn there — it runs nearly every time something happens — but the CI periodic, or CI cross, is this periodic job that just checks out master every now and then, builds, and pushes images. For us to easily consume this in kubeadm — I mean, we want to use the pre-built images — currently the only solution is to use these images, which only contain whatever commits had made it into the periodic CI cross job.
B: Kind of bananas, right? There's no good reason I can see. You know, we have the cloud, and as long as we garbage collect it should be reasonable. There are currently nine hundred open PRs — and hey, it's a forcing function. There's no reason why we can't keep the images for every single thing: not have it be periodic, but have it build for every pull request, and then when the pull request is merged it gets garbage collected.
A: I think that makes sense. It's just a feature that doesn't exist at all currently, and there are probably a lot of weird infrastructure limits in the way, or things in the way it's coded right now — that's what I would expect from just briefly looking at the code — but yeah, that's the idea. Okay, so it's: build for all pull requests, garbage collect when merged, and build for every master commit.
A: So if you're looking at a tag like ci/latest, when you install the binaries you're getting whatever the latest binaries are at that time. But then when you go to pull the images later, there may have been an update in the time between when you installed the binaries and when you try to pull the images tagged with ci/latest. So the only way you can get a consistent deployment from the CI builds anyway is to reference a specific SHA, like Tim said, I think.
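(A hedged sketch of what "reference a specific SHA" can look like in practice; the version-marker file and the ci/ label handling are assumptions about the tooling of this era, not something stated in the meeting.)

    # resolve the marker once, then use the exact version everywhere,
    # so binaries and images come from the same build
    VERSION=$(gsutil cat gs://kubernetes-release-dev/ci-cross/latest.txt)
    echo "${VERSION}"      # e.g. v1.11.0-alpha.X.Y+<git-sha>
    # exact label handling may differ; ci/latest was the commonly used form
    kubeadm init --kubernetes-version "ci/${VERSION}"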
D: You know, we had a few choices that we were talking about. We need to prioritize which ones we would like and which ones we don't. For example, the question from Tim was: do we need ARM images, yes or no? If it's yes, then that is one data point we need to take to them, saying: if you give us a build, then it should have ARM images.
B: They're publishing things that go untested. I think we take "we" out of the equation and talk about the testing infrastructure: they are publishing artifacts that are untested and that are not part of a release. People should not be depending on these bits; they're only for CI.
B: There's no reason they can't do that right now. If they are here and they're actually doing it, sure — but unless they are actually doing it, or have their results in federated testing... As I mentioned, they can publish their own build results at whatever granularity they care about, and it doesn't have to be by consuming the cross-built CI artifacts. They can build from tip of master themselves and still publish their results using federated testing.
D: Right, so let's take that out of the equation first. Then the next question is: we want the latest one — latest-1.9, latest-1.10, all of them — to have SHAs, and the SHA should point to a directory, and the directory should have all the things that we need, which at this point is amd64. That's what we are going to ask them for, and we don't really care about how many jobs they run as long as they give us that.
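(A sketch of the bucket contract being asked for: version markers with SHAs pointing at a directory that holds everything kubeadm needs. The paths are purely illustrative, not quoted from the meeting.)

    gs://<ci-bucket>/latest.txt            ->  v1.11.0-alpha.X.Y+<git-sha>
    gs://<ci-bucket>/latest-1.10.txt       ->  v1.10.Z-beta.N+<git-sha>
    gs://<ci-bucket>/<version>/bin/linux/amd64/kubeadm
    gs://<ci-bucket>/<version>/bin/linux/amd64/kubelet
    gs://<ci-bucket>/<version>/bin/linux/amd64/kube-apiserver.tar   # image tarball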
D: So then the question is: okay, let's assume we can get rid of this ci-cross from the code, in 1.9, 1.10, and latest master. Then what we'll have to go do is go to test-infra and make sure that all the configs are fixed to point to the correct buckets — unless we get a contract in place for them to set up the jobs. So that's the third option.
B: Why don't we? Because we're kind of dovetailing — we're just spiraling around test-infra when we should really be talking with them about some of this, because it's clear that a bunch of stuff has changed and been broken for a while now, and we need to go and fix it with them in a consistent way that makes sense. I'm not very happy with how we are referencing artifacts from GCS buckets for different build scenarios. It's very unsettling, to be honest.
D: Chuck, I'm sorry, we don't have anything written down in any of the PRs; it's just bits and pieces floating around, so we don't have a single place where you can read about this. The latest attempt I have is my PR, where I'm listing out the gsutil commands for going and looking through the artifacts. Other than that we don't really have anything, yeah.
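(The PR itself isn't quoted here; the following are the kind of gsutil commands being described, with the bucket names as assumptions.)

    # list the newest cross-build version directories
    gsutil ls gs://kubernetes-release-dev/ci-cross/ | tail -n 5
    # see what a per-branch marker currently points at
    gsutil cat gs://kubernetes-release-dev/ci/latest-1.10.txt
    # inspect the artifacts published for one specific build
    gsutil ls -r "gs://kubernetes-release-dev/ci-cross/$(gsutil cat gs://kubernetes-release-dev/ci-cross/latest.txt)/"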
B: Yeah, it also points to a fundamental flaw: this stuff shouldn't be so hard. This is just CI, and a lot of other people have their own CI and CD testing pipelines. I think the reason it's so hard is that we kind of over-engineered a solution to solve a set of problems, and now we're living with the legacy of some of that stuff.
A: Okay, the next topic is creating a new GA tracking issue.
B: That was me, yeah. The last GA tracking issue has a bunch of stuff in there, and I think we should probably just create a new issue — I'll probably just do it — but I want to include some other pieces that we haven't had on there, like the config, to make sure, and also whether or not we want to include the phases work. The phases update — it's not even an update, it's more a restructuring of phases to live underneath init — as part of the contract for GA.
G: I just recently stumbled across the situation with the new minimal version of etcd that was required by the latest release of kubeadm. For that version there are no RPMs on my particular Red Hat Enterprise Linux derivative — it's actually an Oracle Linux. I found the needed version on the CentOS repos, which are normally a bit ahead because they get the open source input first. I would actually assume, though I'm not quite sure, that Red Hat Enterprise Linux doesn't have that version either.
G: So if we are recommending to use an external etcd for the HA deployments via kubeadm, then it would actually be a good thing to provide at least one version which fits there — which meets the required minimal version — so that people don't have to pull in a lot of different repos which might not even really fit what they are using.
B: Chuck has an updated doc, from the original etcd deployment doc, for external etcd, which uses the kubeadm phases subcommand to basically stand up the cluster using the containers — similar to a self-hosted etcd, except it's not self-hosted but run from static manifests. That way, instead of you having to manage RPMs and deb packages separately, you can still use the standard artifacts that are being published. It's just that we're using the phases subcommand to lay down the containers and get all the certs and secrets and everything.
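(A hedged sketch of the phases-based flow being described; the subcommand names are roughly as they existed in kubeadm around this time and should be checked against `kubeadm alpha phase --help` rather than taken as exact.)

    # generate the etcd CA and serving/peer/health-check certificates
    kubeadm alpha phase certs etcd-ca --config=kubeadm-config.yaml
    kubeadm alpha phase certs etcd-server --config=kubeadm-config.yaml
    kubeadm alpha phase certs etcd-peer --config=kubeadm-config.yaml
    kubeadm alpha phase certs etcd-healthcheck-client --config=kubeadm-config.yaml
    # write a static pod manifest for etcd; the standalone kubelet then runs it
    kubeadm alpha phase etcd local --config=kubeadm-config.yaml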
A: We're running the kubelet on all nodes, masters and workers, but I see your point. We have a couple of options. One, we could say the only recommended way to run etcd, from SIG Cluster Lifecycle's perspective, is containerized images. Two, we could add debs and RPMs with the versions we know are okay.
B: I don't want to get into the business of trying to support the multiple incantations for distros. We already have the basic stuff in place, the minimal... the MVP of... yeah, container images are better, I agree — I'll just leave it at that. It's a lot easier for us to manage, because we already have the build apparatus and artifacts in place for everything else. So we're just using the infrastructure we've created to stand up an external etcd.
A: All the steps to generate certificates, how to write the kubeadm configuration, and then which kubeadm commands to run in order for everything to work smoothly without touching debs and RPMs — because, as Tim said, the only reason we have debs and RPMs for a kubeadm deployment is basically to ship the kubelet and provide a semi user-friendly, or at least familiar, way of installing things. But you could just as well pull down the binaries. Yes.
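(Pulling the binaries directly looks roughly like this; the URL pattern is the usual published-release layout, and the version is just an example.)

    VERSION=v1.10.2
    curl -L "https://storage.googleapis.com/kubernetes-release/release/${VERSION}/bin/linux/amd64/kubelet" \
      -o /usr/local/bin/kubelet
    chmod +x /usr/local/bin/kubelet
    # the packages otherwise add the systemd unit and drop-in on top of this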
F: I was going to say, I think there would be a lot of things we could do to make the workflow easier if etcd were hosted on the master nodes as well, because they would be part of the cluster and would have access to things like secrets and the certificates — the certs. Getting those onto the nodes wouldn't be such a complicated issue anymore.
A: I think let's keep it for now that we validate Chuck's PR against the standalone case — where you only run the kubelet as your systemd thing on a machine that doesn't have the API server running, so it's really external. In the end nothing prevents you from making that machine the exact same one where you're running kube-apiserver; it's just an IP, and you can refer to yourself as well. But once that is completely tested, then you can co-locate them.
A: If you want, still using the exact same flow. And then in the grander future we might explore other things, like Chuck said, like making it smoother if you have access to things like local certs on disk when you're co-locating. But for now I think our official — the SIG's official — recommendation is: use containerized images (I mean, containers is why we're all here, kind of) and go with external etcd for now.
E: From my understanding of the issue, there seem to be two current workarounds to the problem. One is to modify the systemd drop-in file that is owned by the kubelet packaging, which we don't currently touch in kubeadm, and the other way is to modify the ConfigMap for either kube-dns or CoreDNS.
E: ...when we do the DNS add-on configuration. And that brings up a bigger thing that I've been wondering about, because there are other situations where it seems like it would be helpful if we were allowed to modify settings that feed into the systemd drop-in file and that currently trip up users. This isn't the first, and it won't be the last, issue we hit that relates to kubelet configuration that we're currently pushing off to the user to solve for themselves.
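(For context, the two workarounds look roughly like this on a systemd-resolved host; the drop-in path and flag shown are the commonly used ones, not quoted from the meeting.)

    # 1) point the kubelet at the real resolv.conf via the packaging-owned drop-in,
    #    e.g. in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
    #      Environment="KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf"
    systemctl daemon-reload && systemctl restart kubelet
    # 2) or edit the DNS add-on's ConfigMap to use explicit upstream nameservers
    kubectl -n kube-system edit configmap coredns    # or kube-dns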
E: Yeah, that would be the bare minimum we can do, but it seems like this is the right opportunity to bring up the bigger discussion: do we potentially want to rework the way we're handling the systemd drop-in, to enable not necessarily modifying the drop-in itself, but being able to modify the environment.
A: Yeah, thanks. It's kind of fun, because we really do need to modify the kubelet configuration soon, and one of the blockers for us to go GA is a feature called dynamic kubelet configuration. Right now we depend on everything... actually, I feel we could start moving right now as well. Anyway, the kubelet dynamic configuration stuff is: you write an initial file to disk with a beta spec of the kubelet config.
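(A minimal sketch of such an on-disk config file, assuming the kubelet.config.k8s.io/v1beta1 schema of this era; the field values are only examples.)

    # /var/lib/kubelet/config.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    cgroupDriver: systemd
    resolvConf: /run/systemd/resolve/resolv.conf
    # the kubelet reads this via --config=/var/lib/kubelet/config.yaml
    # instead of taking the equivalent command-line flags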
E: You know, I think the resolv.conf might be something that could be handled through the dynamic kubelet config, but things like the CRI runtime, or other flags like that, or even — what's the other one — the cgroup configuration for the kubelet, those are things that need to be set as command line flags and aren't exposed through the config. Sure.
B: But that's probably where this particular problem could be resolved, via that route. The other stuff is already hard-coded, at least in the drop-in file for kubeadm, for the kubelet, but I think that's like the bootstrapping: we have the initial bootstrap configuration from the deb or RPM installation, and then from there dynamic configuration takes over for all the rest of the knobs.
A: Yeah, currently the really weird problem we have when hard-coding flags in the drop-in is upgrades: you have to do a dance around upgrading the right things in the right order, otherwise things will break. That's why we want to control the kubelet's flags from the kubeadm point of view, so we can recommend a set of flags for 1.10, for example, for kubeadm's use, without touching the drop-in. Right now touching the drop-in is really hard and, as we have said, we shouldn't do it from kubeadm.
A: We will start writing this YAML file with the desired spec and update our systemd drop-in to read this file, because the kubelet component configuration actually graduated to beta last cycle. So we could start this transition now, and as we go we could then add the fancy extras of automatically reloading the configuration and modifying it at runtime in your cluster — which is still alpha, the dynamic part — but the read-config-from-file feature is now beta in the kubelet, which is great, so we can stop using the command line flags.
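(Illustrative only: a simplified drop-in along these lines, once the knobs live in the config file; this is not the unit that actually shipped at the time.)

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # per-host overrides stay in an env file instead of the unit itself
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS $KUBELET_EXTRA_ARGS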
E: I'm not too hopeful for that, because I just finished reading through that entire thread, and basically they would have to write some custom jiggery to be able to track requests through the entire clustered system in order to detect loops, and there really isn't a standard way to do that right now. So, okay.
E: This is limited right now to only the OS versions that are running systemd-resolved. You know, maybe handling it in the packaging and just laying down a sensible default is the best path forward — have that laid down as the default, then a pre-flight check to verify that it's what we expect, and give users guidance on how they could resolve it if they're going against the grain.
A: Okay, I really need to read up on what options to pass there, if those exist, and we'll try to execute on the general plan of getting back control over the config we have for the kubelet, I think, now that the component configuration is beta. And then we'll switch to CoreDNS by default, as discussed, since it's going GA and it's better than the old kube-dns, you know.