From YouTube: 2017-07-11 17.04.57 SIG-cluster-lifecycle 166836624
A
The first thing was that I thought it was worth reviewing our original priorities for this cycle versus what we're currently working on. Not because I think anything we're working on is bad, but because, if we are making changes, I wanted them to be a conscious decision. I wasn't organized enough to find the link to the document that captured our original priorities, though, so maybe we can do this a little later in the meeting.
A
Yeah, let me put this here. I guess it might actually just be worth doing a quick stand-up, which is something we haven't done in a while: who's working on what, so that we can capture that. So, who has been working on things recently? Who can summarize what they're working on in a small number of short sentences?
G
I've been working on figuring out how we can do better TLS validation during the discovery process in kubeadm. I have a proposal which is almost ready to share; I did not quite have enough time to get it ready, so I'm going to talk about it later. I also found a little bug in kubeadm join yesterday and got that fixed. It was my very first Kubernetes PR, yay.
I
I've been experimenting with an etcd operator that doesn't require the Kubernetes API server to be running, which should get rid of the circular-dependency issue, and then I'm going to look at the add-on manager, which I've also promised for, I think, 1.6, and have yet to deliver.

A
Sorry, what was the first thing again?

I
A version of the etcd operator that doesn't require the Kubernetes API server; an etcd manager, maybe, or an etcd controller.

A
Cool.
C
Status: yeah, I have made a couple of PRs for kubeadm. I'm working on getting things into 1.7 that need to get into 1.7, a bunch of them. We had some, well, unfortunate... well, when we used the node authorizer, we uncovered some misconfiguration in some users' environments: the node name in the cert and in the API doesn't match. So we're fixing that by adding a flag to override the node name.
D
Here we go. Or however you pronounce it, anyway. I'm working on that, the actual code implementation, and also some other fixes that had to be merged into the 1.7 branch. I'm working on the upgrade proposal, and I'm also thinking a little about what Justin said: the etcd manager, how to do stuff like that, and load balancing for those two masters, so that it's possible to make a better upgrade plan for 1.8 to 1.9.
K
Sorry, I totally forgot to mention that I have a PR in flight. I've been on vacation (I just got back from vacation yesterday), so I'm still playing catch-up, but I should have time to look at it today. It was in a bad state, I think because of upstream changes, so it still needs some fixes, but yeah, that's on my plate. Okay.
L
So, if you care about that upgrade: there were also some scalability fixes that came in late in the window for the 1.7.0 release, for the certificates API specifically. It is still not fast, but it is faster than it was when 1.7 was cut. I think there's a limit of five node joins per second or something like that; not optimal, but better than pathological, like it was when we cut the release. So, further work in 1.8 will improve performance of the certificates API. Cool.
A
Good, okay. So yeah, I guess that's the story. Does anyone else have any other status updates they want to share?
E
I'll bring up two things real quick; they're not on the agenda. I took the bait a couple weeks ago when Lucas asked for someone to implement the localhost check. That's now fixed in 1.7 in Kubernetes itself, but he asked me to backport it to 1.6, and I will get to that this week, I promise. And the other thing I want to bring up at some point is that it looks like kube...
E
Real quick: we're setting spc_t or something on the etcd hostPath mount or volume, and that doesn't seem to work on CoreOS, but it somehow works on, I think it works on CentOS, but not CoreOS and not Fedora or something. It's really a mess, and we're thinking that if we remove that spc_t thing in the etcd manifest, it will work everywhere, like CoreOS as well. But we'd have to settle for that everywhere, and I'm not sure if that's friendly.
A
Cool, so thanks everyone for the updates. This all looks reasonable; I don't think there's anything insane here with respect to our previously agreed priorities. I guess just to reiterate, the priorities that we agreed to in the SIG were: close ten percent of total issues; improve test coverage for kubeadm (these are all P0s); work on upgrades; work on self-hosting; and then request that other SIGs begin to move their control-plane components and add-ons to use component config.
B
We were planning on working on the scheduler together. I already talked with him; he's already done the proxy, and that was updated according to Mike's spec, and I had talked with Mike while we were out in California about the scheduler potentially being next. There are some logistical problems with the scheduler because of how it reads in information in two different ways, but we planned on at least trying to tackle pieces of that this cycle. I don't know if it'll get done at this point, because everybody's busy, but we'll at least get started.
A
Good stuff. Okay, I'm going to try and keep things moving in the interest of time. So, I made a note that, from what Jacob said earlier, there's a PR in flight for turning on kubeadm PR testing; that's made it into the minutes. The next item on the agenda is that I just wanted to shout out, on behalf of Lucas, to please take a look over the upgrades proposal.
I
Yes, indeed; I presume that's my line. I'm just scrolling down to where we are in the minutes. But yes, I think kubeadm is making amazing progress, and I think the next big step is to get all the installers using it. I think that will both benefit all the installers and benefit kubeadm, by, you know, directing it at the actual problems that kops is facing, Tectonic is facing, and GKE is facing. I know there are sort of unofficial efforts, but I want to see if we could make that more official somehow. We have our short-term, per-release goals, but we don't have an explicit mandate, and we want an explicit group that is trying to drive this big picture of kubeadm actually getting used by all the installers. So one way we could do it, which has a very obvious flaw but is a good strawman, is to have SIG Cluster Lifecycle leads from each of the installers, and maintain a commitment that every installer is going to use it. The obvious flaw is that we don't really want 20 SIG leads, right, but that was my strawman. I guess the two questions we have are: are all the installers on board with using kubeadm, and...
B
It's possible for us to own pieces that would be common, that other people could leverage. Like, if we started to create very simple, reusable modules, you know, for Terraform, or (I don't know if we want to include some generic Ansible Galaxy thing) if somebody wants to do something like that for this piece. I think having the base primitives for reusability seems like an ownership thing that we could take on, especially once we get our own repository. I think at this point it might be a bit much.
F
How do we take kubeadm, where the primary consumer today is people who are running kubeadm directly, and shift our thinking a little bit to add as a consumer the folks who are writing other installers and are wrapping kubeadm as a toolset? I think we built it with that in mind, but we haven't necessarily dotted all the i's and crossed all the t's (or dotted all the t's and crossed all the whatever); we haven't done all that last-mile stuff to get to the point where we really know that it meets their needs. So we need to find a way to actually force that, and reconcile that, and make sure that we do.
F
That group is a customer. I think a working group sounds like a great idea; I think the key is setting its goals. You know, in some ways we're resource-constrained, but collecting designs and goals around that makes a ton of sense. In terms of adding leads: I think, across Kubernetes, we still...
F
We don't have a really great shared understanding of what a SIG lead is and does. The thing that I've been pushing for more than anything else is to have a model where the SIG lead is only a facilitator and actually has no real decision-making power; it's really about organizing and, you know, keeping everybody on track, on task. And so, if that's where things are, I don't think adding SIG leads here works. Now, what do we replace it with? What is, sort of, the architectural steering group of cluster lifecycle?
H
Actually, you set aside your own opinion, and you're moderating: making sure that everyone can contribute, trying to keep a balance between the different pulls in different directions, and not, you know, trying to push your own agenda or influence it. That said, I'm glad to contribute to any kind of task force or working group, whatever; I'm not intimidated [inaudible].
I
Yeah, I mean, I would just say that having a group that is building the right tool for the requirements of Tectonic also serves people that are running kubeadm directly, because people who are running kubeadm directly are probably building something around it. So it's more that we have an open-source example that we're able to work around. Again...
C
So, having a shared group, a working group or something, makes a ton of sense, to actually get eyes on this. Because, I mean, I can happily implement all these features and things and just go with it and call it a day, and then nobody will use it because it was the wrong level of abstraction. So I agree with the goals of such a working group.
F
I think it's totally reasonable to announce that we want to form a working group around this stuff at the community meeting, and solicit who wants to be involved, and write up a little bit of a charter there, along the lines of: look, we want to build a common toolset, to extract as much commonality as we can.
F
And I think, you know, as we started to talk about working groups, there were some working groups that were sort of across the project as a whole, and some working groups that were really a subgroup of a SIG. This is probably more of a subgroup of a SIG, right, but we want to make sure that we get as much involvement, as many people, you know, giving us requirements here, as we can.
C
From what I understand, a working group is different from a SIG in the sense that it doesn't own the code; it's more like an effort for, like, two months or something, whatever is required, or a year. But anyway, it shares SIG Cluster Lifecycle's general goals and code. Cool.
A
Well, it has my blessing, and note the dissenters; if you're a dissenter, please speak now on the formation of a working group to try and get all the installers using kubeadm. Maybe the working group can also agree on how to pronounce kubeadm; no, just kidding. Okay, who else? Okay, so the "shifter": we talked about that. Yeah.
A
Cool. So, everyone in the meeting, please go and try the "shifter" and give feedback. It may also be worth mentioning it in SIG Cluster Ops, if you haven't done that. The reason I mention that is that, ostensibly, SIG Cluster Ops has actual operators of Kubernetes clusters in it, so you may find more fertile ground of actual users there. Cool, but yes, everyone should still try it and play with it. Jacob, you want to talk about the kubeadm extraction?
K
The plan; that sounds like a dental procedure. You're adding to the list of things that everyone should review and give feedback on. Yeah, so Ed's linked to it. At a very high level: I think we've already agreed we want kubectl to go first. There's a link embedded in my document to their plan, and the little wrinkle is that their plan is much more complex, by necessity.
K
There are just more moving parts that they have to worry about, and they're more coupled to Kubernetes core, so the timeline keeps getting dragged out. At a high level, I wanted to kind of hit the same milestones at N minus one: let them hit a milestone first, prove out that they've got something working, and then let us, you know, follow in their footsteps. But their plan now extends to 1.13, which, if I've done my math correctly, is like two years in the future.
K
So I don't know if we want to take two years to move to our own repository. But at a really high level, there's a phase for removing cross-dependencies. We have dependencies on Kubernetes core, and there are actually a few dependencies, including from core back onto kubeadm, so we can remove those and refactor them into the shared repositories; or, since there are a few tiny, tiny functions that are just little thin wrappers around other things, maybe we duplicate code, and where that doesn't make sense, put it in shared repositories.
K
We can enforce that we don't add new cross-dependencies using Bazel rules, and we have a Bazel presubmit verify run, so even if people are running make locally, it will verify that we don't accidentally add new dependencies. Then we mirror the relevant code, mostly read-only, to the new repository, and build up tooling to actually do the build and release and handle our dependencies.
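The enforcement idea described here can be sketched outside Bazel too. A minimal illustration, assuming hypothetical paths and an allowlist that are not the real kubeadm rules: fail the presubmit if any Go file under the kubeadm tree imports a Kubernetes core package outside its own subtree.

```shell
# Set up a hypothetical tree with one offending Go file.
workdir=$(mktemp -d)
cd "$workdir" || exit 1
mkdir -p cmd/kubeadm/app
cat > cmd/kubeadm/app/util.go <<'EOF'
package app

import (
    "fmt"

    "k8s.io/kubernetes/pkg/controlplane" // a cross-repo dependency we want to flag
)
EOF

# Flag any import of k8s.io/kubernetes/... that is not kubeadm's own subtree.
violations=$(grep -rn '"k8s.io/kubernetes/' cmd/kubeadm --include='*.go' \
  | grep -v '"k8s.io/kubernetes/cmd/kubeadm' || true)

if [ -n "$violations" ]; then
  echo "new cross-repo dependency detected"
else
  echo "clean"
fi
```

A real presubmit would gate the merge on this check failing, which is what the Bazel visibility rules and the verify run accomplish more robustly.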
K
Things like that. Get that working while we're still primarily developing in the main repository; clone our end-to-end tests to run against the new repository; then get the test-infra bits working, like the submit queue and the other Prow plugins; and ultimately do a shadow release with the new repository at the same time as we do a release from the old repository with the old process, to make sure things line up before we do the switchover. There are more details in the doc; I'm definitely looking for feedback, especially on anything in that area.
B
So, I have long-standing opinions about the repo breakup, but I will set those aside to say: I think the only prerequisite, if we don't want to shadow kubectl (which I actually would want to shadow), is that the testing infrastructure has been fully vetted for the cross-repo dependencies. Because that is still in this weird state where no one has done anything yet and there's a lot of talk, and that, frankly, terrifies me, because there are conditions that we've already enumerated that are not supported by the current testing infrastructure.
K
Okay. I think there's some prior art there with the way kops does its end-to-end tests, and the fact that when you run kubeadm we dynamically reference the control-plane images, so we're not really statically linked to them. I think it'll work well, but it's definitely a consideration, yeah. We should vet it more heavily and make sure we have a strong plan.
G
That has sort of a rough outline of what I'm proposing. The motivation for this is basically the way the discovery-token protocol works right now: the worker and the master share a symmetric token, which would be fine if there were just one master and one worker, but because there are many workers, it means that any worker can impersonate the master to other workers.
G
So if an attacker was able to leak a bootstrap token, a discovery token, through something like a node compromise, or through something a little less exciting, like server-side request forgery against, say, a cloud metadata provider, anything to intercept the bootstrap token, and they have some sort of privileged network access to man-in-the-middle a new node's connection when it comes online, they can impersonate the master and do bad things to your cluster.
G
I don't think this is an incredibly severe threat, because in most of the scenarios where it applies, it's not necessarily trivial to get a copy of one of these tokens as a totally untrusted attacker, and it's also not trivial to have that kind of network access. Scenarios where I think it could be sort of dangerous right now are, like, physical networks, where that kind of interception could be possible.
G
There are a lot of mitigating factors. So, my proposal for fixing this is basically to add TLS root-certificate pinning to kubeadm join. When kubeadm init runs, it generates the root CA (or you already have a root CA); you hash the public key, take a fingerprint of that public key, and include it as a new flag to kubeadm join. Then kubeadm join, whenever it has a pinned public key, does the discovery and checks that the root CA it discovers matches that pin.
G
This gets tricky: there are sort of two scenarios, which Joe named "swivel" clusters and self-stitching clusters. The swivel case is where you're, like, running the commands: you run the init command, then you copy-paste the join command and you run the join. That case is easy, and we can make that case safe by default. You can't avoid the copied command getting a little bit longer: it has an extra hash, an extra flag.
G
The other case is really hard to cater for, so my proposal is we just leave that alone: let it work the way it currently does, but document it as having this weakened security model. And then I sort of proposed some phases for how to roll this out, including potentially adding a flag to kubeadm join, which eventually we make mandatory, that says "this is unsafe": like, "I'm not pinning a cert, and that's unsafe." Anyway, that's the proposal; I'd love thoughts on the proposal overall, and especially on the phased rollout.
C
Actually, it is possible, for the self-stitching cluster case, where you have to provide the CA (you have to know the CA in advance, of course), to manually take the hash of that CA, add it to the token, and then somehow have it land on the master node, where kubeadm will use it. So I think, instead of documenting that: if you really do this, you have this kind of control, so it's not a problem. Yeah.
G
I actually tried to design the format of the hash so that it's easy to get with, like, the openssl command line. So, if you do want to do that kind of orchestration in Terraform or something, it should be pretty straightforward to get the right hash value and copy it. The other caveat with that approach is that you end up having to keep the secret key for the root CA somewhere, like on a metadata server.
F
In terms of the documentation: if you look at the sort of final step of the phases, you either have to pass a pin, or you have to pass a flag that says "hey, I know I'm not passing a pin." And so then we document that, if you're using that flag that says "I'm not passing a pin," you're now operating in a weakened state.
G
It breaks out into the cases: one is, like, the manual case, where somebody's experimenting with kubeadm or they're running it on hardware; we can make that case safe by default with our copy-paste-these-commands-around flow. And then there's the other case, where somebody's building some orchestration: it should be possible to make that safe too, it just might be a little bit harder, because you need to do some orchestration around the keys instead of just generating a token out of band.
G
So that secret leaks. Actually, the way I've got this implemented right now (I'll have code to share pretty soon) is doing that in the very initial insecure connection that the client makes to get the cluster-info, and then I added some code in there that saves off the certificate chain from that connection. So then you go through, you validate, and once you get the cluster-info, you check the symmetric HMAC just like before; then you check the key pin, and now you know the root CA cert.
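A rough sketch of that symmetric step. In kubeadm the cluster-info payload is actually signed as a JWS keyed by the token secret; plain HMAC-SHA256 below stands in for it, and the secret and payload are made up. The client recomputes the MAC over the payload it fetched over the insecure connection and compares it to the signature before trusting anything, and only then checks the CA pin on top.

```shell
# Token secret shared out of band; payload retrieved over the insecure connection.
token_secret="0123456789abcdef"
cluster_info='{"server":"https://10.0.0.1:6443","certificate-authority-data":"..."}'

# "Master" side: authenticate the payload with the token secret.
sig=$(printf '%s' "$cluster_info" \
  | openssl dgst -sha256 -hmac "$token_secret" | awk '{print $NF}')

# Client side: recompute over the fetched bytes and compare before trusting them.
check=$(printf '%s' "$cluster_info" \
  | openssl dgst -sha256 -hmac "$token_secret" | awk '{print $NF}')

if [ "$sig" = "$check" ]; then
  echo "cluster-info authenticated; now verify the CA against the pin"
else
  echo "MAC mismatch: do not trust discovery data"
fi
```

The point of the proposal is that this symmetric check alone cannot distinguish the real master from any other token holder, which is exactly the gap the CA pin closes.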
F
But, Mike, I think if we take a step back: if we weren't worrying about the self-stitching case, where all the information is created a priori, then we might be able to simplify the whole thing and use this key pin as the only way to actually verify the bootstrap info. That might be cleaner, but doing that while also supporting, you know, what came before...
F
...where Joe was, you know, using kubeadm to actually stitch clusters together, and when he found the sort of self-stitching stuff he was like: oh man, this is a hell of a lot easier than having to sort of extract stuff and move it around. So I think there's definitely a usability aspect to the current scheme that I don't want to throw away as we try and lock this down.
L
I'm not sure. What I would want to see explored is: what would the situation be where the key used to sign the JWT, in the bootstrap-token design, could be leaked, but the private key of the server's serving certificate would not be leaked? What situation would that be, other than a compromise of the master? It seems like the answer's no to that; it seems like that just leaks both. So what attacks does this prevent, beyond what we can't really do anything about anyway?
G
You could keep the private key of the root CA off the master, or offline; then, in addition to, like, a full worker-node compromise, I think that's one scenario. Another scenario is that the reason we have these discovery tokens is because we're going to pass them through some kind of orchestration to the worker nodes when they spin up, and that might be, like, a file on disk on hardware, or maybe, like, EC2 metadata, or something like that.
F
I'll do a copy over to Google Docs. I've been doing it this way just because it's, like, super easy to then turn it into markdown. Quickest way... docs, oh yeah, they all suck, yeah. So I put it in the comments. The other thing, as we look at, sort of, how to move this to beta and try and lock this stuff down: the other change that I'd like to float making is to the default.
F
So, when you do a kubeadm init, the first token that actually gets created, as the init token, has an unlimited lifetime unless you pass a flag saying "I want to restrict this down." I'd like to change the default of that flag to something like two hours or so, which means that, by default, you bring up a cluster and that token ages out. If you want to add nodes later, you can create a new token, or you can pass the command-line flag.
G
The hash... actually, it will be both. We also talked about potentially putting the new verification data and the token together into the same string, so the one token just gets longer, but I decided I liked it better as another parameter. It's cleaner, I think, to understand that it is separate: you could strip off that flag, and it's additional verification, separate from the main discovery process, and you can convey it separately in the scenarios where you can do this.
F
We're on our way. The self-stitching scenario, I think, is incredibly valuable, and in terms of, like, code churn and changes, this is purely additive on the client side; whereas if we wanted to remove the existing design, you know, that could be more difficult, more impacting. Yeah.
F
The other thing I wish we had done (and it was probably a miss) is that we probably should have given the token some sort of prefix, like, you know, "ka" for kubeadm or something like that, or, you know, "kb" for Kubernetes bootstrap, and then used that as the key in the authorizer, because right now the authorizer just looks for the pattern. It might be worthwhile to make that change, you know.
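For reference, the pattern being matched on: bootstrap tokens have the shape of a 6-character ID and a 16-character secret over `[a-z0-9]`, separated by a dot. A small sketch of that recognition step, with made-up candidate tokens:

```shell
# Recognize bootstrap-token-shaped strings the way the authorizer's pattern match does.
looks_like_bootstrap_token() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

for candidate in "abcdef.0123456789abcdef" "not-a-token" "abcdef.short"; do
  if looks_like_bootstrap_token "$candidate"; then
    echo "$candidate: matches"
  else
    echo "$candidate: no match"
  fi
done
```

This is why a distinctive prefix would help: the authorizer could reject non-prefixed strings outright instead of relying on shape alone, making the "does this look like one of my tokens?" check unambiguous.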
F
It's not a thing of collision; it just means that, like, the way the authorizers work is that they look at a token, and there are a bunch of them going "hey, does this look like one of my tokens? Does this look like one of my tokens?", and you want that check to be as reliable as possible, and right now the bootstrap authorizer...
C
One thing which is indirectly related to kubeadm: container metrics from the kubelet metrics endpoint that are populated from cAdvisor have disappeared, due to some... well, it wasn't known at all that they would just go away like that. We don't have any testing coverage, it seems, because it was some user that reported it. But this basically means, for kubeadm: we disabled the public cAdvisor port in 1.7, which is unauthenticated.
C
So there is no way to get, like, container metrics from Prometheus or whatever in 1.7.0, at least I think. Since it was an issue, a regression from 1.6, I think the root cause is going to be fixed, but anyway, it was even more severe with kubeadm clusters, given that we have the public cAdvisor port 4194 secured, so, yeah.
F
I think we're going to be seeing more and more of this as more folks use kubeadm. If there are problems that are more systemic to a release, you know, we're a little bit more on the bleeding edge in terms of enabling things like RBAC and trying to lock down all these insecure ports and stuff like that. So it's going to be interesting.