From YouTube: 20180926 kubeadm office hours
A: For beta, he already has the original doc and he's already going right now, so I think what we'll do is use his checklist that he already has, along with the original documents. There are going to be some minor modifications, just like what happened in 1.13, but I think we already have the docs in place.
A: We made some last-minute changes — well, not last minute, but we made some changes near the end of the code freeze last cycle, because we didn't have a solution to some of the HA concerns. So I think we can create a doc for beta, because we already have many docs, and then use his current checklist issue that is open right now, because there are already known things we need to do. And then, I'm sure there's going to be minor stuff we'll have to fix as we find it.
B: Yeah, I was thinking about the add-ons, because you know I have a proposal about how we should implement the add-ons in a way that users can modify the manifests for the add-ons. And the question here is: should we do this for the kubeadm config, or perhaps we should delay it until somebody starts investigating the bundle solution?
A: One of your cameras went crazy, so I was kind of losing it there. As for writing up a proposal for it — I don't think it's a blocker, because I think you can probably implement those things independently of the configuration changes, or you should be able to. I think the config change of separating out the component config, for lack of a better word, is probably the highest priority, so that it's split into its separate, distinct pieces.
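The component-config split being discussed can be pictured roughly like this: instead of one monolithic config, each component gets its own distinct document in one YAML stream. This is only a sketch — the API group/version names below are illustrative, not necessarily what kubeadm ships:

```yaml
# One YAML stream, multiple distinct documents, one per component.
apiVersion: kubeadm.k8s.io/v1beta1      # illustrative version
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet-specific knobs live only here
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy-specific knobs live only here
```

The "bundle" user experience mentioned below would then just be these separate documents concatenated in one file.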
A: We just have to make sure that there's consistency across that, so I'm not terribly concerned, but I think it is worthwhile to open an issue and investigate it, because I think the user experience is much cleaner with the idea, at least, of having a bundle versus having the separate configs all kind of separated. It would be a lot easier for most people to grok.
A: The next one is moving phases to beta and getting rid of the `alpha phases` subcommand, or at least a portion of the pieces. I don't think there's any concern there. I think we do want to have the split where we still have the subcommand for phases, with the individual subcommands kind of cross-referencing each other. So, I mean, you'd want to be able to use the subcommand underneath init and join as well as in phases.
A: So that way, a person could execute the individual phases separately, and they'd have a unified way to view the stages that we go through. I think the problem we currently have, and what we need to start on or solve in this cycle, is to have a clearly defined, step-by-step set of what all the phases are and the order in which they occur — even having a subcommand for phases that outlines the order they typically occur in.
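The idea of a canonical, ordered phase list where a user-selected subset is always run in that order can be sketched like this (phase names and the runner itself are illustrative, not kubeadm's actual implementation):

```python
# Minimal sketch of an ordered phase runner: phases have a canonical
# order, and a user-selected subset is always executed in that order.
CANONICAL_PHASES = ["preflight", "certs", "kubeconfig", "control-plane", "addons"]

def run_phases(selected, execute):
    """Run the selected phases in canonical order, ignoring input order."""
    unknown = set(selected) - set(CANONICAL_PHASES)
    if unknown:
        raise ValueError(f"unknown phases: {sorted(unknown)}")
    ran = []
    for phase in CANONICAL_PHASES:
        if phase in selected:
            execute(phase)
            ran.append(phase)
    return ran

# Even if the user lists phases out of order, execution follows the
# canonical order:
order = run_phases(["kubeconfig", "certs"], execute=lambda p: None)
# order == ["certs", "kubeconfig"]
```

A `phases` subcommand that prints `CANONICAL_PHASES` would give exactly the "this is the typical order" outline described above.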
B: Yeah, I already reviewed the KEP from Fabrizio about phases. We discussed some ideas in there. For instance, we discussed whether we should support phase command flags being passed through to the separate phases. For instance, you can call kubeadm init and list a bunch of phases that you want to execute, and we were discussing whether you should also be able to pass the flags to the subcommand, but he explained to me that that's not a good idea.
A: With the alpha command, we need to find a final home for how we want phases to work, and Fabrizio has a current proposal in place. But like all proposals, they sound good, and we put it on paper and write it down and then we start using it, and then, you know, we kind of change what we were originally thinking.
B: So, Fabrizio has the current KEP — you can have a look, I posted the link. He gives some examples: you should be able to call kubeadm init and provide, like, a checklist of the phases you want to execute. For instance, you can call the certs phase, the bootstrap phase, whatever phase, and from there we have to handle all those situations where, for instance, the user decided to execute a phase that is completely out of order. Right, Fabrizio?
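One possible way to handle the out-of-order situation described above is for each phase to declare prerequisites, so an out-of-order request can be detected and reported rather than silently failing. This is a hypothetical sketch — phase names and the prerequisite map are illustrative, not kubeadm's actual design:

```python
# Sketch of detecting an out-of-order phase request: each phase declares
# prerequisites that must already have completed.
PREREQS = {
    "preflight": [],
    "certs": ["preflight"],
    "kubeconfig": ["certs"],
    "control-plane": ["certs", "kubeconfig"],
}

def missing_prereqs(phase, completed):
    """Return the prerequisites of `phase` not yet satisfied."""
    return [p for p in PREREQS[phase] if p not in completed]

# A user asking for "control-plane" on a fresh node would be told
# exactly what is missing:
missing = missing_prereqs("control-plane", completed=set())
# missing == ["certs", "kubeconfig"]
```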
D: There are many, many things conflated under the kubeadm alpha phases umbrella, but originally phases were a chunk of the init or other kubeadm workflows. So, for instance, if you think about the renew command, which is now under phases: if I remember well, the renew command technically is not a phase; it should be hosted in its own renew command.
A: Right now, anything that's underneath alpha phases we can nuke; we have that right. But anything underneath init and join, we do not have that right, so we have to make sure that we mark as deprecated anything that we're going to shuffle around. We've been pretty explicit with the top-level subcommands about what their capabilities are, though, so I think we're fine.
A: And I can start looking at it now. I'm held up on the 1.12 release for other work, so I can start looking at the proposal, hopefully this afternoon, in between reviewing some of these other PRs that are kind of flying around for SIG Cluster Lifecycle. So phases is a p0, and the config is a p0 for this release. Those are the two most important things we have to do. I'd probably put add-on management with bundles as a p2, to be honest; I think it'd be nice to have.
A: It's not a requirement. One thing I want to put in at like a p1-level priority is packaging: to stop doing this split packaging, and to actually not even do the Bazel-izing of the packaging, but actually have spec files and just have Bazel build the artifacts from the details of the spec and the Debian packaging. That is, you put all the stuff underneath the main repository, and all artifacts come from that source.
A: I think it's irrelevant: you could execute Bazel or make for the multi-arch support, and it shouldn't matter, so long as the spec file or the Debian packaging stuff is all there. You should be able to build it independently. And I think there's a minor problem in that we've wrapped this RPM spec generation piece inside of the Bazel stuff, which actually doesn't help us, because then it puts that hard dependency on Bazel itself.
A: If we just unwrap the spec and just have the generation of the artifacts, which is just going to call a container to run the build on these artifacts, it's irrelevant: you could use make or Bazel. As long as you have the details, you have input/output. But isn't that what the release repo is doing? That is what the release repo is doing, but I want it to be canonical — having it be in the mainline and disentangling everything else, because right now we have…
A: I think that compounds the problem, because the problem we've had with Bazel is that it can't cross-compile, like, arm artifacts, because of the cgo problem — which no one has actually pointed me to an issue for. It's just magic hand-wavy Google-isms that say that cgo doesn't work with this, and I don't have any idea what the details are. I get it, but I do know that we can do everything with make, and the make build system is already all there.
B: I've got a simple question, because I don't understand the release tooling very well — and nobody does, yeah, except for Caleb. So what we do is we push the debs and RPMs to the Google repository, and my question is: do we want to eventually stop doing that? Like, for instance, if I go to the Bazel website — Bazel being a tool that has binaries for download — I see they have a binary installer, so you can download the binary installer.
A: Originally, I talked many times about building the grand unified container for kubeadm, and we had pieces of this in the past, where you don't even care: the container contains all the details, and you just execute a single command into the container. Whether you're on a Debian host or a Red Hat-style host with RPMs, it would just install the unit file and the other details and the entries in the package management system.
A
The
problem
with
that
is
no
one
wanted
to
take
ownership
of
that
and
people
are
used
to
consuming
stuff
from
Rp
and
Deb's
in
traditional
fashion.
So
we've
always
maintained
that,
but
I
do
think
it's
been
an
albatross
for
many
many
releases
now
and
I
would
like
to
get
I.
Don't
exactly
know
what
the
date
is
for
a
non-existent
watch.
I,
don't
exactly
know
what
the
date
is
for
the
release
for
113
is.
Does
anyone
know
the
actual
release
date.
C: Go ahead — oh, I was gonna say, I think there's something going on right now in the release world about the release repo. I haven't been tracking that work, but they seem to not be updating the release repo, while they are updating the Bazel builds. So, for example, the CRI tools stuff lives in Bazel, but not in the release repo.
B: Have I been monitoring the release process very closely? No, it's not an assignment for me; I've been doing it in my personal time. And I have to say that with every single change you try to make to the release tooling, you might encounter a lot of PR reviews — how to put it — you might face some blockers there. So it is going to be a very slow process, and I have concerns about this being done for 1.13.
G: No, not yet. Right now we are just trying to get the lay of the land, and this was the release where we were trying to figure out where things are, starting with the images and the release artifacts and things like that. You know, we had this question about the debs and the RPMs, where they live and how they get pushed. So far it's just been discovery; we still have to…
G
We
have
to
start
filing
caps
soon
enough,
but
then
we
need
people
to
work
on
that
too.
So,
if
anybody
is
interested
in
that
kind
of
work,
it
definitely
making
sure
that
the
things
in
the
release
repository
and
the
main
KK
repository
I
need
to
be
synched
up.
Somehow
I
just
added
a
comment
in
the
PR
for
the
CRI
to
saying
that
it's
not
enough
just
to
do
it
in
KK.
We
have
to
do
it
in
K
release
too
so
yeah,
no,
not
not
good.
G: Yeah, there is a bunch of us who signed up to say, okay, we'll try to do something. We do not have a normal SIG lead or anything like that yet, so it's just whatever each person is doing. We are trying to take notes and log issues and things like that. So that's where we are right now. It would be good to have a formal definition of what this thing is, whether it is a working group or a SIG, and how we should operate.
G: Right, this means we found more information, because Doug M. was driving — you know, was pushing the buttons on the release scripts themselves — so we were able to uncover a lot more things than we usually do when things are behind the scenes. And on the build of the RPMs and the debs, we were also able to make some progress, because we got some information from Caleb towards the end saying these are the commands that we use and how we do it.
A: The next one on the list is test automation. kubernetes-anywhere has got to die with fire, and I want to replace it with the work that we're doing for Cluster API, because we're using kubeadm directly for the MVP. We do need to set up…
G: I agree with that, and I do want to have automated HA deploys as part of the testing for kubeadm.
A
We've
never
done
this
before
this
will
be
a
first,
but
it
will
put
I'm
basically
signing
Jason,
Liz
and
Chuck
all
up
into
the
hot
seat
for
this
one,
to
get
this
in
place
so
to
basically
set
up
the
test
apparatus
in
G
agree
so
that
we
actually
have
CI
signal
from
the
cluster
API
and
VP
that
they're
currently
working
on
to
deploy
Kubb
ADM
latest
on
a
the
test.
Degree
yeah.
B: So, a big problem I see at the testgrid is that most of the tests run on Google infrastructure. There's nothing wrong with that, but if it can be 50/50 with AWS, I think we are going to have much better coverage. So I wanted to ask something else here about AWS, because I haven't used it for a very long time: do they have, like, free accounts? Because on Google, you know, you can have cheap virtual machines and stuff. What is the state of AWS?
A
Don't
think
it's
something
that
you
should
be
able
to
get
all
the
signal
Auto
hooked
up.
We
have
to
talk
with
the
testing
for
folks
that
get
the
credits
from
the
CN
CF,
but
this
should
not
be
on
anyone's
dime.
This
should
be
part
of
the
test,
automation
and
the
credits
that
are
donated
to
the
CN
CF
they're,
a
platinum
sponsor
and
all
that
test.
Automation
should
be
jig
through
there
and
it
would
be
a
periodic
job,
not
a
not
a
blocking
PR
job
I.
Also,
there's
there's
other
issues.
A: So that's kind of a goal this cycle. I want to talk with the kubespray folks, if they're going to default to kubeadm deployments. I mean, if we want to have a GCP deployer that kind of covers across the cluster lifecycle, I would ideally like it if they would have a periodic job to replace kubernetes-anywhere for GCP — to have a deployer for that. You know, this would be a super nice-to-have if they're going to default.
B: Well, from my perspective, that would be, like, efforts on another front, next to the work with Cluster API. My idea was to patch kubernetes-anywhere as much as possible. So, first of all, do you have an estimate? Can we get this after the first part of the cycle, or the second?
A
Gonna
put
this
as
a
hard
deliverable
after
we
after
we
evaluate
post
MVP,
which
is
gonna,
be
like
a
week
fish
from
now.
So
after
that
this,
this
is
like
the
highest
priority
test
item
other
than
a
couple
of
feature
additions
that
people
have
requested
with
regards
to
variants,
because
we
need
signal.
B
So
I'm,
probably
gonna,
stick
with
patching
cooperate
is
Jenny.
One
for
low
I
have
an
example
where
kubernetes
in
over
is
currently
green,
but
I'm
seeing
a
failure
in
the
couplet
in
the
master
branch.
So
the
signal
is
motivating
correct
correctly
because
it's
probably
to
the
old
docker
versa,
which
is
112.
A: That's another conundrum with some of the deployment of kubernetes-anywhere: we've already moved the stack needle, and we need to update all the dependencies within the stack, and the maintenance burden there is annoyingly high, especially with regards to debugging. So I want to bypass all of that, because we need to get the other signal up in place for other reasons. Getting that signal in place would be beneficial to the community, to everybody, because we don't have HA signal currently, yeah.
B: Basically, the situation there is that I'm saying that CoreDNS as a feature gate right now serves a good purpose for us, because with a boolean flag we can pretty much toggle between kube-dns and CoreDNS. So my proposal was that we don't remove the feature gate until kube-dns is deprecated completely or removed.
A
We
should
at
least
rename
it
like
right
now.
It
should
be
a
feature
gate
to
enable
Kubb
DNS,
instead
of
like
inverting
the
defaults
for
the
feature
gate
to
be
on,
because
that
doesn't
make
any
sense
right
so
cleaning
up.
That
is
a
general
thing.
We
should
do
right
because,
right
now,
it's
like
you
don't
feature
gates
are
typically
an
opt-in.
You
have
to
explicitly
specify
them
to
hop
then,
but
right
now
it's
defaulted
to
true
or
yes,
so
we
should.
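The inverted-gate problem being discussed can be sketched like this: because the gate defaults to on, a user sets it to False to opt in to the legacy behavior, which is backwards for a feature gate. The function and gate map below are illustrative, not kubeadm's actual code:

```python
# Sketch of the feature-gate defaulting problem: "CoreDNS" defaults to
# True, so the gate reads inverted -- you set it to False to *opt in*
# to the legacy kube-dns add-on.
DEFAULT_GATES = {"CoreDNS": True}  # defaulted on, unlike a typical opt-in gate

def resolve_dns_addon(user_gates):
    """Merge user-supplied gates over the defaults and pick the DNS add-on."""
    gates = {**DEFAULT_GATES, **user_gates}
    return "coredns" if gates["CoreDNS"] else "kube-dns"

resolve_dns_addon({})                  # -> "coredns" (the default)
resolve_dns_addon({"CoreDNS": False})  # -> "kube-dns" (inverted opt-in)
```

The rename idea above would flip this into an explicit kube-dns opt-in gate that defaults to off, matching the usual opt-in semantics.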
A: We don't usually do that very often, and I know that we're trying to work out the semantics there, but I don't think many people are going to care about the legacy version of the feature-gated flag across the release cycle, because right now it's inverted, like I mentioned before; it's weird.
A: Well, that's not necessarily true. What's happening is that people are running scale tests and finding a bunch of weird issues. Basically, every new technology that gets defaulted on requires a maturity cycle and a period to find the sane defaults that work across the widest swath of things, and they find and fix scale bugs; it's just a process.
G: That PR that we were talking about — you just summarized what we said there, which was: look, if you are going to say a feature is now sort of an opt-in or opt-out, like what we are doing here, then you should have created an option flag of some sort where you can choose between the two, instead of trying to use the feature gates to do that. But we never talked about adding another feature flag to mean the opposite of the feature flag that we already have, yeah.
B: So from the maintenance perspective, from the perspective of kubeadm, I see this as extra work for us to add more flags or change the current behavior, because we were basically waiting for kube-dns to go away, and then we can pretty much remove the feature gate. I mean, to me it's kind of okay, but if the team thinks that we should remove it or change it, I guess we can start with that.
A: I'm okay with it. I think we just leave it the way it is, so long as we keep on following up and keep on pushing it, so that the defaults are actually changed. Because I think what happens every cycle is that even in 1.11 it was the quote-unquote objective to get CoreDNS to be the default, but then Google pulled the strings at the end of the release, because the results were not what they wanted. And then that happened again in 1.12, where it's not defaulted for their configuration.
B: I think we should probably leave it as alpha, not change the gate, and, like we said, ask Fabrizio to explain some more in the docs. I'm not sure whether we have a problem with the join workflow, but what was the problem there? If you join — so if dynamic kubelet config is enabled on a node, you cannot modify the kubelet config. That was basically it.
A: I'm okay with that. I think getting the configuration in place is the highest priority, and getting the phases in place — those are the two highest-priority items in order for us to get to GA. It's not ideal, but we can solve some of these. We can cleanly have external etcd stand up using some of the other tooling that exists; it would just be nice to have it all lumped into kubeadm.
A
I
I
get
it
it's
just.
You
know
right
now,
we
are
storing
configuration
state
in
a
config
map
that
we
act
on
on
upgrade
and
we
don't
provide
a
way
to
interact
with
that
config
map
to
facilitate
reconfiguration
during
upgrade
or
incorporating
modifications
that
have
been
made
to
the
cluster
during
upgrade.
A
It
seems
like
a
layer
violation
right
like
I.
Are
you
get
what
you're
saying,
but
it's
kind
of
like
who
has
ownership
to
see
the
world
right?
Kubb
ADM
is
a
tool
that
doesn't
actually
see
the
state
of
the
world.
It
only
acts
on
the
data
that
it
has
so
it's
kind
of
a
list
in
a
sense
it's
partially
stateless,
because
it
doesn't
actually
go
around
and
get
all
the
state.
It
just
grabs
the
state
from
something
other
location.
A: The static pod arguments for the control plane, for example — that's the big one — or you'd go and modify the kubelet configs. You know, if you're not taking advantage of the dynamic kubelet configs, you're modifying the kubelet configs that are on disk, and we need a way to, you know…
A
Say
like
this
is
kind
of
like
p3
at
this
point
for
this
release,
given
how
the
current
workload
that
we
have
listed
there
I
think
maybe
getting
agreement
or
consensus
on
how
we
would
have
solved
this
problem.
It's
a
hard
problem
to
solve
right,
I
think
maybe
agreeing
upon
the
approach
that
we
want
to
do
with
Rico
might
be
good,
but
I
can't
I,
can't
and
honestly
think
that
we
would
be
able
to
execute
on
this
within
this
cycle.
Cuz.
It's
it's
super
thorny
in
yeah,.