From YouTube: 20181106 kubeadm office hours
A
Hello, today is Wednesday, November 7th, 2018. This is the standard kubeadm office hours. We've got a pretty packed agenda. Our plans for today are probably going to be all about the code slush, which is coming up this Friday. Anything else, our auxiliary topics, will probably be punted until after we get out of freeze. This is kind of a short cycle and we're doing a major push to get to GA, so anything that isn't high priority on the list of getting to GA is probably going to be booted.
A
I don't think it's really sane to do that. I do recognize that there is a unique possibility that somebody might want to do this, but the primary use case for kubeadm is for folks to have a standard 80-to-90% user story for most operations. The idea of a mixed kubelet configuration is nothing we've ever talked about supporting, and we don't even actually have dynamic kubelet configuration working properly right now.
A
We've defaulted to the same initial grab-the-config-and-apply-it-on-join that we've had since the beginning, and it's a much safer route, so I think it's okay to remove it. Again, once we get into beta it's actually harder to remove things; it's easier to add. So if we do find a user story that makes sense, we can easily add fields, but once we're in beta, removing fields has to go through the deprecation policy.
F
From my side, I was thinking about this, and maybe I'm missing lots of context here, but I think if we leave it like it is right now and use different feature gates for different nodes when they join, it's going to be harder for us to upgrade. It can open up completely different areas in users' clusters that can fail on upgrade. I think it would be safer, so I agree to remove it, at least for now, and we can revisit it later.
G
Yeah, I just joined, by the way, but seemingly can't get my camera to work today. Anyway, I do agree that removing the feature gates is better right now because, as you said, we can easily add them back. I also agree on the point that, well, if we start, we maybe don't know what we're getting into. If we let all the nodes, say we have ten nodes, have different feature gate configurations, then when we upgrade it's like a total mess, so.
A
I don't want to stretch the limits. Well, in the past, Lucas, we've stretched the limits to the very extent of what policy allows. Ideally, we have the PRs up. They don't need to be completed, right; as long as the PRs are up and we have the final shufflings in progress by Friday, that's what matters, right. Yeah.
C
The next one is unifiedControlPlaneImage. That lets you specify a container image that is used for the control plane and for kube-proxy as well. Now, this field is a string, and you have to provide the image repository, image name, and version of the unified image, whichever image you want to use. It was suggested to change this into a boolean.
A
I'm a general +1 to this; I think it's a good thing to do. I think the users will not miss it, because we already have the overrides for image metadata for other things, so I don't think you need it; it becomes superfluous, and all you need is the yes/no "I'm going to use hyperkube." I think the only people that would standardly use this, that I'm aware of, are the kubespray folks currently, and kind also uses it in the same way.
A
You still have the override for the registry, but it will default the image name and tag version from the other parameters, right. So all this does is solve the upgrade problem that Ricci was talking about: internally, the tool can update and rev the version for you. Otherwise, if you're pinned at this long string, there's no way for us to revision it on an update or upgrade. Yes.
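The change under discussion might look roughly like the following; a hedged sketch for illustration, where the v1alpha3 field name matches the discussion but the boolean replacement and the version values are assumptions, not quotes from the meeting:

```yaml
# Old shape: the full image string is pinned, so the tool
# cannot re-version it on upgrade.
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
unifiedControlPlaneImage: k8s.gcr.io/hyperkube:v1.12.2
---
# Proposed shape: a yes/no switch; repository, name, and tag are
# derived from the existing overrides and the Kubernetes version.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
useHyperKubeImage: true
imageRepository: k8s.gcr.io
kubernetesVersion: v1.13.0
```

With the boolean form, an upgrade only has to bump `kubernetesVersion`, and the image reference follows automatically.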
C
Maybe there is a misunderstanding. Right now we are using the feature gate to select whether to use CoreDNS or kube-dns. What is changing is that we are using this structure to select whether you are using CoreDNS or kube-dns. So we are changing the way you can choose, but you can still choose. What I'm discussing is: since we are introducing this new way of choosing which DNS add-on you want, what do we have to do with the old way?
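The two selection mechanisms being compared can be sketched like this; the exact field names are assumptions for illustration:

```yaml
# Old way: a feature gate toggles the DNS add-on.
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
featureGates:
  CoreDNS: true
---
# New way: an explicit structure selects the add-on.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
dns:
  type: CoreDNS   # or KubeDNS
```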
C
The last bit about the API, sorry for bugging you about this, is: in the first prototype for the add-ons, there was a proposal to also allow extra args for add-ons. So basically, there was an extraArgs field for setting extra args for DNS, and there was a similar one for the proxy.
G
Yeah, proxy doesn't make... sorry, DNS doesn't make sense for me either, and generally it should be said that we don't really want to use extra args; it's just something we have to deal with at the moment, because we want to migrate away from flags, right. We want to move to component config as soon as we can.
G
Hence the proposal and the effort; I did the last cycle with the KEP. So this is just a stopgap that we are supporting in our beta, and probably also as we go to a GA API, because flags are coming along into v1beta1, but I hope that in our GA API sometime we will be able to support specifying component config as well, yeah, in some form.
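The component config direction mentioned here is usually expressed as separate typed YAML documents fed alongside the tool's own config; a hypothetical illustration, not the finalized API, with the field values chosen only as examples:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
---
# Instead of opaque extra args, each component gets its own
# typed configuration document.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```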
A
In the long run, we should use that as a promotion policy for the configuration for kubeadm, right. Yes, yes, yeah, because we do not want to go to GA having all this kubelet beta configuration in there. Like, we'll go to GA for kubeadm, that's fine, but for the embedded configuration you don't want to do that; that's the question.
A
We could plumb that through by default, and that's kind of the current plan: to plumb through the default settings for hostname-override. You can fix that problem, but I don't know.
A
You can use templating-style things to do that, though. So, like, if you use things like referencing the downward API or the host IP, you can pass those parameters through as overrides, right. So if I wanted to pass the hostname into the proxy, I can use the host IP, or the downward API, or other mount semantics that apply to everything, and then those configurations will apply to every proxy in the system and allow for a per-proxy customized configuration. Okay.
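The downward API approach described here might look like the following fragment of a kube-proxy DaemonSet pod spec; a hedged sketch, with the image tag and file paths as assumptions:

```yaml
containers:
- name: kube-proxy
  image: k8s.gcr.io/kube-proxy:v1.13.0
  command:
  - /usr/local/bin/kube-proxy
  - --config=/var/lib/kube-proxy/config.conf
  # The node name is injected per pod via the downward API, so one
  # DaemonSet template yields a per-proxy hostname override.
  - --hostname-override=$(NODE_NAME)
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```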
G
I would instead prefer to deliver the standard solution. So if that means we have to walk the extra mile and set hostname-override as a command line flag to every proxy based on the downward API, then we mount the downward API and set it all up ourselves, and not let the user specify extra args in our API, in order to avoid clutter, because it's unlike the rest of the control plane components. I mean, it's fairly straightforward.
A
I like that, but we need to be explicit about it then, in the configuration, because we use GoDoc now for the configuration file. Okay, we should basically specify: for overrides, do XYZ, which is basically kubectl apply or kubectl patch for modifications to this particular add-on. And eventually these add-ons should be moved out of kubeadm proper too, so that's another escape hatch, yeah. So I like that; I think that's a good policy, yeah.
G
I think, I mean, because that is already a solution, and if you're so advanced that you need to do fancy overrides for all proxies in the system, I think extra args is too little for you anyway. So then I'll try to keep it out of the beta API, and if there's a super strong argument in the next release, we may revisit. Okay.
G
Control plane, go. Yeah, yeah, that's right, right. So here, if you enable that, it's going to create this audit policy file with, like, a base-level thing, set the log path to something under the kubernetes directory, set the log max-age, and mount the volumes. So I mean that's fine, but I wonder.
A
You don't need to. The way the KEP is working through for dynamic audit policy, it is almost like mutating webhooks, where it's just an arg you specify to enable it on the API server. So it's just an extra arg, and then you can actually, via the API itself, set and mutate policy. So you don't need any of this. Okay.
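For context, the static setup being questioned amounts to a minimal policy file plus flags on the API server; a hedged sketch, with the paths and retention value as assumptions rather than the tool's actual defaults:

```yaml
# A base-level audit policy file, e.g. /etc/kubernetes/audit/policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
# Wired up via API server flags such as:
#   --audit-policy-file=/etc/kubernetes/audit/policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log
#   --audit-log-maxage=2
```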
G
Yes, I investigated it during the summer, for a couple of days or so, and it's super complex and nobody's even started working on it. In the meantime, I was thinking about starting the work, at least starting a KEP, but I didn't have time before I joined, so at the earliest I would expect, like, the summer of next year for an alpha version, because right now the API server doesn't even have a spec. Right now the whole API server internal configuration is made up of options structs, which have AddFlags.
G
It's not really compatible with component config, basically, so there we have a problem. And then, before doing the API server's own config API, we first have to do the generic API server parts and all the things the API server depends on; those have to be refactored first. So yeah, that's a long way off right now, and that's why we're keeping the flags.
G
I don't know, but I can say this much: I don't see them going away anytime soon, because they're GA, and the kubelet's component config is well tested and working really well, but it's still marked as beta, and it has to be marked at least v1 or GA for a couple of releases before we can even think about starting to remove the flags, due to the deprecation policy. So basically it's a deprecation warning, and it's saying: this is not the end-goal direction.
C
Next topic. Over to you.
D
I just want to point out that with this change we broke kubernetes-anywhere. The setup there basically uses Terraform to deploy, like, four nodes and one master, and, Fabrizio, you have to change apiEndpoint to controlPlaneEndpoint now. So the possibility here is that we might break existing setups with this change. Just so you know.
C
It was there since... I don't remember. Well, the problem with kubernetes-anywhere is that it was using this field improperly, so I don't blame kubeadm for this. They were using the control plane address for the local address, with the internal address of Google Cloud, and this address was not accessible from outside. So it was a problem on the user's side.
G
Then I really think we should rename apiEndpoint to localAPIEndpoint, or localAPIServerEndpoint, or whatever, because, I mean, I have worked on kubeadm since the beginning and I didn't have context on this one, but it really confused me. So if the name at least says something about the fact that this is only the local kubelet-to-API-server connection, then it would be helpful for many others, I think. How about localAPIServerEndpoint or localAPIEndpoint?
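The rename being proposed would distinguish the two endpoints roughly as follows; a hedged sketch with illustrative addresses, not the final API:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
# Where this node's own API server instance binds and advertises.
localAPIEndpoint:
  advertiseAddress: 10.0.0.4
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
# The shared, externally reachable address of the control plane.
controlPlaneEndpoint: lb.example.com:6443
```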
A
Well, why don't we hold the rest of the configuration review for tomorrow? We will do, like, a final pass through it together. Right now there are other things that do need to get addressed, especially with regard to the phases modification and some of the other changes here. The first one: is that okay, Lucas?
A
Yes, we can talk tomorrow and go through a final audit of the API. Next, graduating dynamic kubelet config: this one was about switching over that phase's work. You had a comment on there about needing to run it because it was dependent upon the other phase, and hiding it, which I'm pretty against. Is there a reason why we would even want to push it, even if it was hidden, into the other phases, I mean?
D
Are we basically forbidden to integrate it into a phase runner which respects the order of phases, now that we are graduating the alpha phases to init phases? Okay, so we are deciding whether we have to have dynamic kubelet config exposed as a public phase, where the user can invoke it, you know, or whether we want to hide it and still make it possible to be executed in order, but only controlled with the feature gate.
C
It works only on the master node, or in the join-control-plane workflow, because it requires that you have superpowers in order to use dynamic kubelet config. A normal worker node does not have enough privileges to set dynamic kubelet config for itself. And I also pinged someone in SIG Node about that, and they answered explaining that this is the current state and it is not going to change.
C
So: remove the dynamic kubelet config feature flag, and the dynamic kubelet config phase, from the init and join workflows, and instead use the command that now lives under kubeadm alpha that allows you to set dynamic kubelet config for a specific node. Basically, you init your cluster, you join nodes, then you go back on the master node, or somewhere where you have the alpha command, and you run "enable dynamic kubelet config," and it works.
A
I'm fine with that, given that it's been broken since epoch. The problem with dynamic kubelet configuration is that it was a lot of promise and a lot of headache and heartache; it has never actually worked properly, and it comes with a litany of security issues, and other issues as well. I'm fine if it's a totally opt-in scenario. So basically, what...
G
What is it called, like admission controllers, of course, but it can register stuff but not update it. And as far as I know, we don't have any "register with dynamic kubelet config, config map XYZ" option on the kubelet, which would then be required for this to work. We need to do the patch and, as Fabrizio said, that requires admin, so I'm...
G
We have... it's a beta feature gate right now. Can we keep it in its flow in case... or we kind of have to? Can we keep it in the init flow if you specify the feature gate, but move it away from the phases to a dedicated kubeadm alpha command, and deprecate the feature gate?
G
Mark? Yeah, yeah, we can, as Tim was going to say, yeah. We can always deprecate the beta feature, wait for a period of time, then downgrade it to an alpha feature in the meantime, and then, when we feel ready about having it as a separate subcommand or whatever, we'll move it up to the kubeadm toolbox of commands other than init, that you can use in the general case to promote a node and make it start using dynamic kubelet config, yeah.
C
Basically, if I create a cluster where I do init using kubeadm with an older Kubernetes version, and then I do an upgrade, everything works. But if I use an old version of kubeadm and then I do the upgrade, there is an error. Okay, I'm going to investigate this and complete the work on it, but I have a background question: what is the current status of test coverage for upgrades?
A
Okay, we have a hard stop, so let's take this to Slack, and let's plan to have an informal conversation, probably early tomorrow, as well. There's a lot of work here that needs to get done, and we can start conversing on Slack about the final changes for the API stuff, as well as any issues that we have. So.