From YouTube: kubeadm office hours 2019-08-21
B: So, first — okay, a quick PSA here is that we have a number of PRs that are not merging. They are LGTM'd and approved, but there's something wrong with Tide, which is responsible for the merge queue. I have not checked what the problem is, but it seems that we have some PRs already stuck.
B: If this continues, I guess I can try asking SIG Testing to see if anybody knows what the problem is. Another PSA is that we merged the PR that updates CoreDNS to 1.5.0. It was a long-standing PR, and it finally merged. I checked the signal today, and it seems that we have a green signal for the 1.15-to-master upgrade — and master is currently the same as 1.16, because the branch is fast-forwarded. This is great news, you know, because I had to test it; luckily, it works.
B: In particular, the IPv6 support in kind: I got contacted today by Antonio from SUSE, who explained the problem to me, but I don't really have a good solution. He said that he's going to continue investigating and eventually, possibly, contact the CoreDNS maintainers. I don't know all the details of the problem.
B: How is the signal for this... sorry — something else important: import-boss. I did a check yesterday, and now it seems that the only imports remaining that are problematic for us are the IPVS imports: we import stuff under pkg/ for some of these IPVS checks, and we also continue to import the internal kube-proxy a couple of times. So I believe Fabrizio put a hold on the PR from Rosti about that. Do you guys have comments on this before we move on?
A: Well, personally, I think that we should do some more deliberation over this one. It's obvious that we probably won't have any ability to migrate between config versions of component configs. So one way is to just merge that PR and add manual migration, or simply generate a new config every time — then we won't need any migration code, but we will basically overwrite any user changes to those configs.
C: For the checks, I think that we can rework these checks and bring some of them into Kubernetes. I have already prepared a PR, so I can rework them to fit the Kubernetes repository standards and then use these checks in kubeadm. So basically we end up with some code that checks the kernel version and whether the IPVS modules necessary for Kubernetes are loaded. I think that we already do this in the validators, so I think we can do this. For the moment, I've also planned to attend the SIG meeting tomorrow to discuss this.
B: Kube-proxy already hides its internal types for, you know, that kind of configuration, so the package is really internal. So what we did is we forked the validators — they are here — and also the defaulters. In the case of the kube-proxy code it's much bigger, but we basically decided to create forks because it's a similar situation: kinder is like a wrapper around kind, and if we start using only the public types of kube-proxy, we lose the validation and we lose the defaulting.
D: The defaulting is not a problem from my point of view, because the defaulting will be applied by the kubelet or kube-proxy when we pass a configuration to them. So my suggestion would be to rule the defaulting out of the discussion; but we do have a problem with validation and migration, so let's try to nail that problem down. If I think about validation: having validation is a nice thing for the users, because we can give the user errors before they start the clusters.
D: We have to guarantee that the cluster keeps working during upgrades, and this is a problem that should be addressed. We can consider this problem out of the scope of this PR and get this PR merged, but at least I would like to see a statement somewhere: what is our position with regard to validation and upgrades?
B: Yeah, so a very generic question: do you guys think that it's a good idea to completely remove this IPVS check from kubeadm? It creates such a complication: we now fetch the kube-proxy config, and then we also import this so that we can perform the validation. I think this is what's going on.
A: I think that, like, if you specify IPVS as a mode for kube-proxy and kube-proxy does not find the necessary modules, it will actually try to fall back to iptables, and if it can't do iptables either, it will fall back to userspace mode. So this isn't a fatal error, and I'm not sure... I think that we can say that if we remove this, it can be a non-blocker for us, simply because kube-proxy can actually continue serving even if the IPVS modules aren't present.
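[Editor's note] The non-fatal behavior described here — missing IPVS modules producing a warning rather than a blocking error — can be modeled as a preflight check that separates warnings from errors. This is a minimal illustrative sketch, not kubeadm's actual preflight API; all names are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// CheckResult separates non-blocking warnings from fatal errors,
// mirroring the warning-vs-error distinction discussed above.
type CheckResult struct {
	Warnings []string
	Errors   []error
}

// ipvsModulesCheck is a hypothetical check: missing IPVS kernel
// modules produce only a warning, because kube-proxy can fall back
// to iptables (or userspace) mode.
func ipvsModulesCheck(modulesLoaded bool) CheckResult {
	if modulesLoaded {
		return CheckResult{}
	}
	return CheckResult{
		Warnings: []string{"IPVS kernel modules not found; kube-proxy will fall back to iptables"},
	}
}

// fatalCheck is shown for contrast: some conditions must block startup.
func fatalCheck(ok bool) CheckResult {
	if ok {
		return CheckResult{}
	}
	return CheckResult{Errors: []error{errors.New("unsupported kernel version")}}
}

func main() {
	r := ipvsModulesCheck(false)
	fmt.Println(len(r.Warnings), len(r.Errors)) // 1 0: a warning, not a fatal error
}
```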
B: I think these are connected, actually — the IPVS utils in kube-proxy — and also, by the way, one of these packages ends up importing the Docker SDK, which is crazy; so yeah, we should definitely stop doing that. I think mostly everything else is okay. We had help with creating a repo for the system validators, so today I'm also going to basically copy our validators from the kubeadm packages to this repository.
B: Here we have a bit of a tricky question, because when I spoke with Lantao from Google, who originally wrote these validators, we discussed that the validator repository should follow the tagging and branching of k/k. It is very tricky: we don't have a publishing bot here, so I'm thinking something along the lines of: whenever we push a change, we should always tag the specific commit, and if it impacts different branches of k/k, we should branch this repository as well. So, does anybody have comments?
B: Yes, but this creates a little bit of a strange situation, because imagine that today we push a tag, but this repository might not see any updates for the next couple of years, maybe. So does that mean that the next tag we push should be for a completely newer version of Kubernetes? Like, if today we push a tag that is 1.16, in a couple of years it might be, you know, 1.22 or whatever.
B: We wanted to map the versions exactly to the versions of Kubernetes, because, like, they are currently used by kubeadm. But the actual truth is that these validators were mainly created for the node end-to-end tests, which are strictly bound to core Kubernetes.
D: I guess... no — I think that if we are consistent, and these are versioned according to semantic versioning — so major changes are breaking changes, and so on — then each version of Kubernetes should vendor its own version of this repository. But...
D: I think that at this stage we are free to do whatever we want. But, sorry — how Kubernetes is released is one topic; this repository, which hopefully will change very slowly, is another topic. So I don't see the need for managing many branches and many, many tags when there is no need; it's just to simplify the management of this repository. If this repository never changes, why should I, I don't know, branch on every Kubernetes release?
B: Because if there is a problem with master of this repository, and the current version of Kubernetes imports this repository and you want to fix something, then ideally we should fix it in a branch, not in master. So Kubernetes should import a tag that targets a commit that is in a branch. If we apply a fix to this branch, the system validators used by a certain Kubernetes release can update their import to that tag; but if you change everything in master, it can break other consumers.
D: Maybe I'm not explaining it well, and I don't want to block the meeting on this point. I think that we can debate whether having branches per release and so on makes sense or not, but it does not make sense to apply the same versioning as Kubernetes.
D: If I got this right, when we have a breaking change, since you're following semantic versioning, you have to create a new release and a new branch. So some consumers can continue to consume the older branch, and then a consumer can move to the new branch.
E: From my point of view — I'm pretty new to this issue — I see advantages in both solutions. If we have our own versioning schema, and kubeadm vendors it and k/k vendors it as well, then we need some kind of support matrix, as Peter said, because if kubeadm upgrades the system validators, then we have to upgrade that in Kubernetes as well, and we need to say: okay, if you're using Kubernetes version whatever, the system validator that applies is this one — or the other way around. And if we are using the...
E: If we are mimicking the Kubernetes releases, then I see another problem: what happens if we forget to push a new branch here when we are about to release a new Kubernetes version? Then, when they bump the go.mod in Kubernetes, it fails because the branch is not there yet. So I see that issue as well; since it's manual, it's pretty hard to keep up, yeah.
B: So the question... my idea originally was to basically, you know, push changes to this repository when a change affects a version of Kubernetes, because essentially we are validating Kubernetes nodes with this repository. My idea was to create a branch only if needed. This means that we can push a 1.13 branch, but if we don't have any updates, we can then push, you know, like, a 1.17 — skipping like four releases — so the branches for those are going to be missing. I mean, that was my idea.
E: That can be really, I don't know, confusing, because I think it will be weird that the go.mod in Kubernetes will import or require something that is 1.13 when Kubernetes itself is at 1.17; I think that's going to be a little weird, and in that case maybe we just want our own numbers in the schema, I think, but...
B: But we can still tag, you know. We can skip both branches and tags: so, you know, the 1.14 tag is not going to exist, the branch is not going to exist; but if we push a 1.17, this is going to match the version of k/k. The branch is going to exist here — called release-1.17, I guess — and then k/k can use this particular tag in the go.mod file, so they can upgrade the go.mod file from 1.13 to 1.17, skipping.
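[Editor's note] The scheme described — k/k pinning a tag of the validators repository and skipping intermediate releases — would look roughly like this in a go.mod file. The module path and version numbers are illustrative, not the actual pins used by k/k:

```
module k8s.io/kubernetes

go 1.12

require (
    // Pin to the tag cut from the release-1.17 branch of the
    // validators repository; skipped releases (1.14-1.16) simply
    // have no corresponding tag or branch.
    k8s.io/system-validators v1.17.0
)
```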
D: It is the same for every other project: this project evolves with its own life, and it is up to the consumer to choose which version or which branch to consume. But, by the way, I don't want to block on this. I think that we should stick to the same versioning schema that all the other projects which live outside of Kubernetes are using; I don't see a reason for reinventing the wheel, yeah.
B: But look — if I ask you which version of the Cluster API vSphere provider is compatible with which version of the Cluster API repository, can you list the compatible versions, all the compatible pairs? That demands a huge matrix, in my opinion, because every single change can break something. That is why I was thinking that we should use the existing versions.
B: Okay, so this is another topic that came from the back channel: I sent a PR to kubeadm to print the stack traces for errors. The idea was to only print the stack trace if the user provided a verbosity level greater than or equal to five. We had a long discussion here about whether this is the right thing to do, and I think the couple of remaining options were — according to me, one of the two available options is to dump stack traces for all errors to the user's screen.
B: Maybe one day, when we start supporting a clear definition of what an alpha and a pre-release build are, we can start applying, you know, Go linker flags that allow us to define this — alpha builds could always have the verbosity... sorry, the stack traces. But currently the Kubernetes release process is so complicated that we don't have a way to configure this, I think.
E: My vote — really, I don't have a strong opinion here, so these are just my thoughts. I would vote for always printing the stack trace; but if that's a problem for some users that get afraid of it, write it to some temporary file and say, "here is the temporary file that you can copy and paste into your bug report," if you want. That would be my ideal option, because I find it a little weird to tie stack traces to the verbosity level; I find it a little weird, yeah.
B: A quick comment: basically, you know, when a command fails for a user, we usually tell them to increase the verbosity level so that we can see more data, more output. By telling the users to increase the verbosity level, we are now also going to see a stack trace. It's really a developer feature. There is a description for this — I added a link to the document; I think it's in the PR itself — so basically the rationale is there.
E: My only comment — and again, I'm not blocking this, so feel free to move on — my only thought is that it's better if, from the very first time a user opens an issue to us, they provide the full backtrace, even if they didn't run with a high verbosity level. The verbosity level is like: "I'm going to dump lots of information while lots of things are happening, and you're going to see absolutely everything that I'm doing."
B: Okay, so I don't see Arvin there on the call, but I saw that the IPv6 dual-stack PRs are slowly merging in kubeadm. I don't know if he's going to continue to work on the other phases too, but, you know, for the recording: we merged the primary phase for dual-stack in kubeadm. So — any other topics? We can look at the pending PRs and backlog.
B: Yeah, it probably touches the... sorry — it touches our tests, our end-to-end tests under test/e2e_kubeadm, but the change is only cosmetic; it's basically changing this import, so it's fine. This is currently blocked on top-level approvers and testing. Yes — okay, so, Aaron...
B: So this is something that Google sent: a PR to update the way we call GCE from kubeadm. Yeah, we basically added a hold on this, and it's blocked on a top-level approver, you know, someone who has to pass an API machinery review, but it's fine. By the way, I just wanted to add another PSA here: import-boss is quite buggy; there are a number of bugs in import-boss.
B: Yeah, so I'm going to close this PR, by the way, and following the recommendation from Fabrizio, instead of checking if pods are running, I'm going to deploy a test DaemonSet and see if it deploys during upgrade — and that's going to be a pre-flight check. My only concern with that is that we have to make sure that this DaemonSet is going to deploy despite the pod security policy in place; I'm going to explain the experiment quickly with the default policy.
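[Editor's note] A minimal test DaemonSet of the kind described might look like the following. The name and image are illustrative, not what kubeadm would actually deploy:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: upgrade-preflight-check   # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: upgrade-preflight-check
  template:
    metadata:
      labels:
        app: upgrade-preflight-check
    spec:
      containers:
      - name: pause
        # Minimal image; the check only needs the pods to schedule and run.
        image: k8s.gcr.io/pause:3.1
```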
D: And at the top of the document we write that these are exported for this version, and for other versions, look at the rest of the document. Okay — so basically, the only PR for kubeadm that we have to do is related to the staging, and possibly doing this cleanup that we are suggesting, yeah.
B: I think this is going to work; it is going to fill out the phase of the staging where we are trying to upgrade the kube-proxy add-on.
B: This is a ticket for tracking the etcd upgrade. So this was an interesting topic. Just as an explanation — we have three minutes left — maybe to share the knowledge: what we found is related to providing a certificate authority bundle to kubeadm, that is, to supporting kubeadm with an external CA bundle.
B: Yeah, so basically there was also a problem here: the user did not have a certificate authority inside the bundle. One of the certificates was just a plain client certificate, for a client to connect to some server, and this certificate immediately breaks kubeadm — because, first of all, if something is a CA bundle, it has to be a CA bundle, and everybody agreed on that.
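[Editor's note] The rule stated here — a CA bundle must contain only CA certificates — can be sketched with Go's standard library. This is an illustration of the idea, not kubeadm's actual validation code:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// validateCABundle parses a PEM bundle and returns an error if any
// certificate in it is not a CA (e.g. a plain client certificate),
// or if the bundle contains no certificates at all.
func validateCABundle(bundle []byte) error {
	found := false
	for rest := bundle; len(rest) > 0; {
		var block *pem.Block
		block, rest = pem.Decode(rest)
		if block == nil {
			break // no more PEM blocks
		}
		if block.Type != "CERTIFICATE" {
			continue // skip keys or other PEM content
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return fmt.Errorf("invalid certificate in bundle: %v", err)
		}
		if !cert.IsCA {
			return fmt.Errorf("certificate %q is not a CA", cert.Subject.CommonName)
		}
		found = true
	}
	if !found {
		return fmt.Errorf("no certificates found in bundle")
	}
	return nil
}

func main() {
	fmt.Println(validateCABundle(nil)) // empty input is rejected
}
```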