From YouTube: kubeadm office hours 2020-02-05
A: All right, I've got some quick PSAs. The Kubernetes CI system is very flaky; if you have PRs, it might take a while to merge them. From what I hear, people are working on fixing some of the issues in the Kubernetes CI, but the timeline for that is not clear to me. This seems to have some sort of problem as well.
A: A short update about kubeadm moving out: I experimented with a tool for auto-tagging and branching a repository based on another repository, and this worked out pretty well. I have some sort of proof of concept here, a little application. Marek last time proposed that maybe we should be using GitHub workflows to automate this process, and I played with that and it works, so we could definitely do that instead of Prow.
A: Also, in the kubeadm repo, I don't know, another tool: I started thinking about a tool for automatic fast-forwards, and it can work as long as Kubernetes is consistent with its tagging sequence around the release cycle. I need to ask some folks about some of the expectations there, but I guess after this experiment (this is about the repository and branch synchronization, and about the potential automatic fast-forwarding) I can update the KEP to reflect these discoveries from there.
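A minimal sketch of the precondition such a fast-forward tool depends on, assuming tags follow the usual Kubernetes-style vX.Y.Z[-alpha/beta/rc.N] sequence (all names here are illustrative, not the real tool):

```python
import re

TAG_RE = re.compile(r"^v(\d+)\.(\d+)\.(\d+)(?:-(alpha|beta|rc)\.(\d+))?$")

# Order pre-release identifiers so that alpha < beta < rc < final release.
PRE_ORDER = {"alpha": 0, "beta": 1, "rc": 2, None: 3}

def tag_key(tag):
    """Turn a Kubernetes-style tag into a sortable tuple, or None if malformed."""
    m = TAG_RE.match(tag)
    if not m:
        return None
    major, minor, patch, pre, pre_n = m.groups()
    return (int(major), int(minor), int(patch), PRE_ORDER[pre], int(pre_n or 0))

def can_fast_forward(tags):
    """An automatic fast-forward can only trust the history if every tag
    parses and the tags were cut in strictly increasing order."""
    keys = [tag_key(t) for t in tags]
    if any(k is None for k in keys):
        return False
    return all(a < b for a, b in zip(keys, keys[1:]))
```

If the release tagging ever breaks this ordering assumption, the tool would have to stop and ask a human, which is exactly the expectation A wants to confirm with the release folks.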
C: [inaudible]

A: Yeah, I guess until Kubernetes release adopts this, we have to maintain it ourselves, because we are the only consumer, and it's not at all clear to me when they are going to want this. To my knowledge, SIG Storage also wants something similar, so they may be the second consumer of this. It ended up at around, I don't know, one thousand five hundred lines of code, but it's pretty well unit tested, at least I'm unit testing it. It has a dry run, and I'm unit testing the dry run as well, so that the expectations from the dry run are the same as those from the regular run. If you want to take a look at this, I guess you cannot comment because there's no PR yet, but I think…
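The dry-run property A describes can be sketched like this: derive one plan, and have both the dry run and the real run execute exactly that plan, so unit tests of the dry run also pin down the real behaviour (function names are hypothetical, not the actual tool's API):

```python
def plan_sync(source_tags, dest_tags):
    """Compute which tags still need to be copied from source to dest."""
    present = set(dest_tags)
    return [t for t in source_tags if t not in present]

def sync(source_tags, dest_tags, push, dry_run=False):
    """Run (or merely report) the plan. Both code paths share plan_sync,
    so expectations from the dry run match those from a regular run."""
    plan = plan_sync(source_tags, dest_tags)
    for tag in plan:
        if dry_run:
            print(f"would push {tag}")
        else:
            push(tag)
    return plan
```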
A: [inaudible]

D: I've done that for the past couple of releases and just dropped the ball. I think I recall, a few months ago when we actually released 1.17, I tried to do the conformance tests, and three of my tests for networking were failing, and then I forgot about it. I just tried to run Sonobuoy on a 1.17.0 cluster last night and I got 12 failures in varying categories, so I'm kind of curious what's messed up, and whether anybody else has successfully run Sonobuoy on a kubeadm-bootstrapped cluster.
[crosstalk]

D: I mean, I have a pretty beefy laptop. I run it with like 12 cores: three nodes, 4 cores, 4 gigs each, and I think it's mostly network bound, pulling all of the containers that are necessary and then doing all of the I/O and the polling in the tests. It doesn't tax my laptop; it's just downloading all the time. It took something like 8400 seconds to run all of the tests after bootstrapping the cluster, which takes 5 minutes.
[crosstalk]

A: In the long term I would prefer if we kind of unify, because not everybody wants to run the Vagrant setup. I wonder if maybe we can use kinder, because kinder is much more transparent compared to stock kind. Kind is opinionated: it applies a bunch of networking patches and stuff like that, and if we use kinder, kinder is transparent.
A: [inaudible]

D: The idea was, the reason we picked Vagrant over anything else was that it was not AWS or GCE, right, it was not a vendor solution where it's like "okay, these people own conformance". We wanted it to be as agnostic as possible. So I'm fine, you know, if the docker-in-docker solution meets our needs, that's cool with me.
[crosstalk]

D: I'll log it, and I'll post it up in the meeting notes.
[crosstalk]

C: Yeah, I know that PR, but I think that it isn't actually working too well for us; at least for the last change that touched kubeadm, I don't agree with the fixes. So I think I would actually ask for putting it on hold, and the author of the PR didn't actually maintain it, so it's probably going to be closed at some point automatically.
[crosstalk]

A: A random person from Google, a SIG Node maintainer, submitted a PR yesterday, and he basically exposed the pause image as a field in the kubeadm configuration. I tried to explain that v1beta2 is locked at this point, so instead we closed the PR, and he commented there that it would be nice if we supported a customizable pause image. Honestly, I don't see a problem with that; you can build your own pause image. I think actually it's SUSE doing something like that; for some reason they were using a custom pause image.
A: [inaudible]

C: Yeah, so there is a slight problem here with the pause image. We actually specify it to the kubelet, but the kubelet is not actually making any use of the pause image itself; it just forwards the option to the dockershim layer. So basically the thing that is making use of this option is the CRI implementation, and this goes even beyond the kubelet, and I think that today, for example, an override of the pause image is not going to end up being picked up by containerd or CRI-O or whatever.
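For context on why the kubelet option alone is not enough: containerd reads its own pause ("sandbox") image setting from its CRI plugin configuration, roughly like this (a config fragment; the section name varies by containerd version, e.g. `[plugins."io.containerd.grpc.v1.cri"]` in containerd 1.3):

```toml
# /etc/containerd/config.toml
[plugins.cri]
  # containerd pulls the sandbox ("pause") container from here,
  # regardless of what pause image the kubelet was told about.
  sandbox_image = "k8s.gcr.io/pause:3.1"
```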
[crosstalk]

C: Yeah, I think that many people would like to configure many of the different settings that are basically hard-coded inside of the containerd or CRI-O implementations, and I think that this is probably one of those things. But again, I need to actually check the code base to see if that's the case.
[crosstalk]

A: Let me refresh over here… yeah. We have these weird commands in the API example docs, which is kind of bad. Honestly, I can send a PR to remove this completely from v1beta1 and v1beta2, or we can just leave them there to not break our, I guess quote-unquote, policy of not touching the APIs.
[crosstalk]

A: I was also thinking about another installer of a different kind, but I'm not sure if this is the right meeting for it. … Yes, if we have time, I just wanted to mention it. I was thinking that, with the discussions from last time, we were leaning towards an add-on installer that bundles some versions of add-ons, instead of being able to install any arbitrary add-on. This is pretty much the Cluster Bundle solution from Google; it's exactly the same, except that they already have it implemented.
[crosstalk]

A: So I was thinking about the general solution of: hey, what if we have a folder with a bunch of YAML, and we know which YAML maps to which add-on, and we know which images an add-on wants to pull based on the YAML in the folder, and we can tell kubeadm to just do an apply on this folder, and there we have a list of applied add-ons. I'm not sure why we didn't consider that; maybe I'm missing something important.
D: What you just mentioned, associating an add-on with a folder or bundle of manifests that's reachable through some transport, is basically a package, and that exact mechanism is supported by the add-on installer. So if the user wants to supply a component config for the add-on installer that says "hey, I have this local directory with all of these manifests, please call it my add-on, or my CNI installation, or my fork of CoreDNS", then the add-on installer will go and do a rolling apply on that, and then, yeah.
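If such a component config ends up working the way D describes, a user-supplied file might look something like this (a purely hypothetical sketch; the group, kind, and field names are made up for illustration and do not come from the actual add-on installer):

```yaml
apiVersion: addons.x-k8s.io/v1alpha1   # hypothetical group/version
kind: AddonInstallerConfiguration
addons:
  - name: my-cni                        # "call it my CNI installation"
    source:
      directory: /etc/kubernetes/addons/my-cni   # local folder of manifests
  - name: coredns
    source:
      gitRepository: https://github.com/example/cluster-addons  # example URL
      path: coredns
```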
D: If they decide to remove it from what's matching the in-cluster component config, then it'll get pruned, that kind of thing. So that is how it works. You can also do it from a git repo; you can also do it using the official kustomize support that's currently in kubectl. We want to be able to do that with an OCI image that's signable and distributable; we want to be able to do it with Cluster Bundle.
D: You know, that's an extensible thing where the API could have additional fields for something like Helm, depending on what scope and adoption there is, right. But we wanted to build the minimal thing that was simple and just worked, and so basically applying manifests from a folder, or a git repo, or HTTP-reachable manifests through kubectl, was the simplest thing we could think of.
[crosstalk]

D: Yeah, basically it's literally just kubectl-applying whatever bundles of things kubectl currently supports, then adding a component config on top of that and making it make sense. So it's not just "hey, give me a directory and there's no structure"; it adds the minimal amount of structure needed to differentiate this add-on from that add-on and version it in an API.
E: [inaudible]

D: So the way that I envision that working, decoupling it from what's currently inside of kubeadm, is: if you don't provide an add-on installer config, then kubeadm will have the internal override that says "for this version of Kubernetes, here's the recommended add-on installer config", and the kubeadm config commands would output that as well, right. So if the user wants to extend the default for that Kubernetes version, it would work.
D: So currently that would be supported over a network using HTTP and then git; that's what's implemented as a minimum right now. So for the alpha, we could publish those official add-ons in the cluster-addons git repository; we could push them to a GCS bucket owned by the Kubernetes project.
D: There's a bunch of ways we could distribute those things, so it could be HTTP, it could be git. Probably HTTPS is fine, right, for the default add-ons, yes. And then we could consider using OCI or something, if that develops to be a standard. I know that there are some folks from Red Hat and Google right now who are collaborating on the manifest bundle KEP, which includes an OCI packaging format, because the conversation in the add-ons group has been that we don't think it would hurt.
[crosstalk]

D: It might be, you know, more helpful in an air-gapped scenario, but I mean, you could literally just download files to a folder from whatever kubeadm is managing as well. It gets a little bit weird when you're talking about source provenance and high-compliance environments and stuff; that's when people want signed artifacts for what things end up in places, but most of that is satisfied by a git repository as well.
A: [inaudible]

D: I have not linked to that, no, so let's… yeah. And I'm thankful as well for the patience that all of you have exhibited with this, because it's been really quite difficult for me to get the time to do this to the level of quality that it probably requires and deserves. So thanks for the continued effort in this area, thanks.
[crosstalk]

A: On using klog itself: I think it came up from this issue. Fabrizio was not happy that we have a mess of logging styles in kubeadm, so he created this issue, and, as was immediately pointed out here, kubeadm is doing a lot of stuff. I actually agree with this. I saw a discussion today with Jordan Liggitt and others that they don't like this pattern, but I do like it, because you define your logger behind a wrapper somewhere in the code base and you don't care.
C: [inaudible]

A: If we're calling klog for everything, I don't think that's great for the user. I think ultimately what klog should have supported is a way to remove the data at the beginning, which is the file location, the type of the log entry, this stuff, the header or whatever it's called; it's not customizable. If we have a PR for klog and we remove this part, we can use the same logger, we can use klog for everything, based on a flag set by the user.
A: [inaudible]

D: I'm also interested in this for the installer library, which takes in a stream that it expects to be able to write to. We could have that stream come from the logger, and then a line delimiter or something, but right now I'm just passing in, in the library's dependencies, a standard-in and standard-out.
D: That's what I'm doing here, but I don't know if we have any place where we have a writer-to-logger abstraction, because otherwise you just end up with something naive, like you pass standard out and standard error and what happens, happens. So if we don't have a log wrapper that wants to do something with that stream, then it's a little weird.
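The writer-to-logger abstraction D is asking about is a small adapter; in Go it would satisfy io.Writer, and the same idea looks like this in Python (a generic sketch, not kubeadm or klog code):

```python
import logging

class WriterToLogger:
    """File-like object that forwards complete lines to a logger, so a
    library that expects a writable stream can feed a log wrapper instead
    of raw stdout/stderr."""

    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level
        self._buf = ""

    def write(self, text):
        # Buffer partial writes and emit one log record per full line.
        self._buf += text
        while "\n" in self._buf:
            line, self._buf = self._buf.split("\n", 1)
            self.logger.log(self.level, line)

    def flush(self):
        if self._buf:
            self.logger.log(self.level, self._buf)
            self._buf = ""
```

A library that takes `out` and `err` streams could then be handed two of these, one per log level, instead of `sys.stdout` and `sys.stderr`.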
[crosstalk]

C: Because kubeadm detects multiple CRI sockets being available, and those are not the co-present docker plus containerd CRI sockets. With modern docker you actually get both sockets, and with both present, docker is going to be chosen. But if you get something else besides docker and containerd, the code is going to bail out with an error and ask the user to specify explicitly which socket to use, yeah.
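The selection behaviour C describes can be sketched as follows (the paths and function name are illustrative; the real kubeadm code in Go differs):

```python
DOCKER_SOCKET = "/var/run/dockershim.sock"
CONTAINERD_SOCKET = "/run/containerd/containerd.sock"

def choose_cri_socket(found):
    """Pick a CRI socket from the detected ones, mirroring the behaviour
    described above: docker wins the docker+containerd tie (modern docker
    exposes both), and any other ambiguity is an error the user must
    resolve with an explicit flag."""
    if len(found) == 1:
        return found[0]
    if set(found) == {DOCKER_SOCKET, CONTAINERD_SOCKET}:
        return DOCKER_SOCKET
    raise ValueError(
        "multiple CRI sockets found; specify one explicitly "
        "with --cri-socket: " + ", ".join(sorted(found)))
```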
[crosstalk]

A: Tricky, tricky case. We can just give up here: we should just leave it in the pod config, and if people don't want it, they should, I don't know, remove the annotation manually. It's unfortunate that we added it originally, but we have a major problem with making changes to phases; we don't have a policy on it. There is no good way to do that without breaking users, because you are exposing your implementation details with the phase breakdown.
A: [inaudible]

C: Yeah, so on the config signing implementation: I messed up initially and closed this PR, but then reopened it. I'm working right now on it, basically tackling review feedback and also experimenting with a few other things, and probably tomorrow or on Friday I'll have the new version.
C: It's good to have it a version or two before we actually introduce a breaking change in some of the component config versions. So, for example, if kube-proxy is going to get a new alpha config version in 1.19 and drop v1alpha1 in 1.20, then it's probably worth it for us to merge this as soon as possible, so that the kubeadm-generated configs actually get signed.
C: And this means that more clusters are going to have signatures, and more users will actually benefit from this patch, so that the old configs that we generated can actually be thrown out without users having to basically go and manually patch component configs that are generated by kubeadm, which is not fun at all.
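The "signing" being discussed is about kubeadm telling its own unmodified output apart from user-edited configs, so it knows when it is safe to regenerate them. A minimal version of that idea (a hypothetical sketch, not the actual kubeadm mechanism) is a content checksum stored alongside the config:

```python
import hashlib

def sign(config_bytes):
    """Checksum that kubeadm-like tooling could store, e.g. as an
    annotation on the ConfigMap holding the generated config."""
    return "sha256:" + hashlib.sha256(config_bytes).hexdigest()

def is_unmodified(config_bytes, stored_signature):
    """True when the config still matches what was generated, so the tool
    may regenerate or replace it without clobbering user changes."""
    return sign(config_bytes) == stored_signature
```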
A: [inaudible]

C: This one I'm tempted to keep in the work-in-progress state. The idea behind this PR is basically: if kubeadm is dealing with a newer version of a component config that was specified by a user, it's going to treat it as a byte blob and move it along without actually doing any modifications, and without bailing out and saying that this version is not the one that kubeadm recognizes.
C: However, there is an open PR by [unclear] for instance-specific versus shared component configs, and some of the details of that PR may actually have an impact on this one, so I'm basically waiting for it to merge to be able to cover it here. For example, if we actually deal with instance-specific configs: where do we actually place those, even if they're byte blobs? What command-line options do we feed the component? And stuff like that. So there are some details here.
C: This is not necessary, so basically, all of the implementation of the new component config versioning scheme is not necessary to land exactly in 1.18. It was mostly done with the rework, the kinds-to-groups PR, and with dropping of the internal types. All of the rest of the PRs are basically UX changes, helping users deal with the new, much harsher component config handling scheme.
C: [inaudible]

A: Yeah, like I said, I'm going to be very interested in how this is going to develop. Maybe in the future, maybe next year, we can have a survey asking people: okay, what do you think about this new way of managing component config? Do you want the Kubernetes project to automatically manage this for you, with, for example, automatic configuration on the side of the components, with flags for endpoints?