From YouTube: 2017-10-10 17.04.10 SIG-cluster-lifecycle 166836624
C: Okay, so this wasn't actually my pull request, but I think it was dims; he submitted a pull request to sort of extract all of the kubeadm code into staging. I know that we discussed it a few months ago in a prior release planning, and so I was under the assumption that we were sort of one release behind kubectl, because we wanted to see how they would handle all of the sort of affiliated dependencies and CI and stuff like that.
D: My nickname is dims, so hello, everyone. So I was chasing something else, and I noticed that we had a git repository for kubeadm with issues, but the code was here in the main repo. Then I was looking at whether we could kind of stage it out, just like the other repositories that we are pulling out of the main repository, and that kind of worked out fine.
D: So the pull request itself is passing all the tests, except for the kubeadm GCE test, and that is because that test is failing for everything, both the CI and the PR jobs. The e2e kubeadm GCE tests are failing for all jobs, and not just my job. I did dig into that one a little bit this morning, and it seems to be because of CNI not getting initialized. I haven't been able to drill down more than that, because there are no kubelet logs available on this job.
B: In terms of moving our code out, I linked to the doc that Jacob wrote a little while back, called "extracting kubeadm", where we discussed sort of the rationale for waiting. I mean, I think that technically we could take the code out now; there are no dependencies I think that would break. It's more of a process issue. Right now, since we're in the main repo, we get built as part of the official releases.
B: You know, we are tracked as part of the official releases; they block their releases on making sure our tests are passing, etc. And as part of extracting kubectl out of the main repo, we expect them to solve a number of process issues around code being spread across multiple repos, which we didn't want to deal with. We wanted to focus on actually writing code and building product instead of the process around how to manage code across multiple repositories, and let someone else solve that problem first.
B: So that's why the plan right now is to wait until kubectl moves out and then move out in the release following theirs. You can catch up on sort of the rationale and the plan there, and — the one thing I have to say — if you think that we should be doing it a different way, please comment on that doc and propose a different path forward.
D: The idea here was, we can move it to staging and let it live in staging, and we can choose not to mirror it also, if that's a concern. But anyway, it was an experiment. It's up to you all to figure out what you want to do with it, and as far as I'm concerned, I was able to pull that off, in the sense that all the tests are running and things are building, and it seems like we have a good separation, and it's easy to do it whenever we want to do it.
B: There were a couple of things that Jacob and I put in a doc about actually modifying the build rules to prevent people from making changes to the main repo that added sort of spaghetti between kubeadm and the rest of the code, and I think a little bit of that is starting to creep in; like, kubefed is starting to import some of our constants. So...
B: If you did want to help sort of push the process of extracting forward, it might be useful to take some of those actions of making sure we fully isolate our code in the main repo, so that when we are ready to move it out, process-wise it's a very simple pull request, whether we write to a staging repo or go straight to our own repo.
E: Well, originally — so for this you'd have to go to the API machinery call to understand how many different repositories there are plans to split up, because originally it was three; now I think it's like nine or ten, and I don't follow the logic anymore. Only Daniel, Stefan and David Eads are pretty much the source of truth — maybe the source of truth — but I think...
G: It's like, you start off with, like, three, and then you're like, you know, it's always been ten; everyone knows it's ten. You say three, I can get past that, but everyone knows it's ten. But yeah, that is okay — yeah, that's true. Yes, it's gonna get worse before it gets better, I guess.
I: The difference between where we are now and git submodules is, right now, if you want to modify the code, you modify it in the staging directory and the mirroring happens outward. So you can make one PR that changes something in the staging directory and something that depends on it, in a single PR, instead of having to do the thing we'll have to do once it is actually in its own repo.
B: All right, well, I think the TL;DR here is: the API machinery folks are pretty much the only people that know how the multiple-repo split is proceeding and where it's headed. If some people are interested in helping push that along, they should chat with the folks in the API machinery SIG. And I think the consensus that I'm hearing from people on this call is that we don't want to get embroiled in that and get distracted, and I think that's...
C: ...I was saying that it's kind of tricky to do, and you know, it's quite difficult to extract, like, a fully qualified domain name to put into a cert. But there could be a way that we could do it to try and improve the UX there in terms of the init operation, so I just kind of wanted to canvass feedback here on what people think.
C: So you can append SANs into the TLS cert, but I think their argument was: why doesn't kubeadm make, you know, a sane assumption and then insert it, without people having to specify it? I mean, I personally don't have a problem with the additional flag; it's just that I thought that kubeadm was in a position to predict the hostname, and they could always script-generate that config, you know what I mean.
C: I think the only way I could do it is to try and extract the hostname and do DNS, like an nslookup on it, and if it passes, then append it. But yeah, it kind of feels messy to me; it doesn't feel like a good way to do it.
F: One thing that would be useful for us to document is: let's say you bring up a cluster and you're like, oh crap, now I need to add a SAN, right? How do you actually do the cert rotation for that — re-issue the certs for the API server to actually add the SAN? Because I think that's probably a fairly common thing: like, oh, I didn't realize I wanted to reach it through this name, let me add something to it — and you don't want to have to rebuild the cluster. Yes.
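
(For reference, a minimal sketch of how extra SANs could be declared through the kubeadm config file of this era, the v1alpha1 MasterConfiguration; the field existed then, but the values here are illustrative, not from the meeting:)

    # kubeadm-config.yaml (illustrative values)
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    apiServerCertSANs:
    - my-loadbalancer.example.com   # extra name the API server cert should cover
    - 192.0.2.10

    # kubeadm init --config kubeadm-config.yaml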
B: Excellent — and that's actually a segue into Fabrizio's topic. He said he wasn't sure if he'd be able to make it today, but he did put forward a design for improving our docs, which is pretty well aligned with what Jamie was just saying. So, Jamie, you should definitely take a look at this, and other folks that are interested in making our docs better should also take a look at this.
B: It is awesome that Fabrizio is driving this forward, because docs are generally a sort of under-loved part of the project, and we really appreciate people thinking about docs and trying to make them better. They are one of the main touch points for users, right? Users will run a command, and then, when it doesn't do what they expect, they will go look at the docs. So this is just as much a part of the user experience as the code that's actually executing.
C: Actually, so I looked at that doc earlier. One thing which Fabrizio mentioned is that a potential format for our main docs could be similar to how kubectl does it, and so they have, like, a flashy kind of GUI-type thing. I didn't know whether that would be suitable for us, because we don't actually have that many config options compared to kubectl. So yeah, that's something we need to think about too: like, whether minimal is better, or whether we want, you know, all the things in the UI.
B: And pretty much none of the other components are meant to be executed directly by people, though, right? So I think, in that sense, kubeadm is similarly aligned with kubectl, in the sense that we expect, in some cases, a human operator to be executing these commands and needing to have that context and that help, whereas you don't really expect a human operator to execute the API server binary.
A: It's not exactly a happy update. So, you know, like we agreed, I have tried to push this forward so we could get the design discussion closed, then actually finish the implementation and get this feature into 1.9 as quickly as possible, so that testing could proceed with self-hosted upgrades and with kubeadm. The main issue that seemed to be open was this host port handling — how that would be done with the new update strategy — and it was suggested that I go to SIG Apps. So I went to SIG Apps; I put together a doc suggesting two different ways that we could possibly handle it. I asked them for comments a couple of times, and then at the meeting yesterday no one had really looked at the document except for you, Robert, and I got even more pushback; it kind of felt like people were finding new reasons to try to shut this proposal down a bit.
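
(For context, a minimal sketch of the DaemonSet updateStrategy field as it existed in the apps API group of this era; the proposal under discussion would add a new strategy alongside the existing ones. Names and values here are illustrative:)

    apiVersion: apps/v1beta2          # apps group as of 1.8
    kind: DaemonSet
    metadata:
      name: self-hosted-component     # hypothetical
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: self-hosted-component
      updateStrategy:
        type: RollingUpdate           # existing strategies: RollingUpdate, OnDelete
        rollingUpdate:
          maxUnavailable: 1
      template:
        metadata:
          labels:
            app: self-hosted-component
        spec:
          containers:
          - name: main
            image: example.com/component:v1   # hypothetical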
A: A lot of people were saying, well, we need more use cases — that was one big criticism for adding this. Another was, well, the apps API is going to 1.0 soon, so we shouldn't change it before that, and we should go through an alpha/beta/stable kind of feature gate. And then there were a few concerns around scheduling unification, and that we should talk to SIG Scheduling to see what their plans are for the next year, and, you know, make sure that our changes are going to be compatible with their plans, and so on.
A: So it seems like we've kind of maybe increased the number of blockers rather than decreased them as a result of this so far. And yeah, I think the next step that was proposed was actually now to go to SIG Scheduling — so we've actually increased the number of blockers rather than decreased them, unfortunately.
E: I can help you out there. There is no plan, at least in the short term, to convert DaemonSets into scheduled items. We've talked about it many times, but it's not a high-priority item that we've planned on dealing with in the near term. We've discussed it many times, but there's always been a resource problem and other priorities; namely, priority and preemption has been a big issue that we've had to deal with, and it will affect this SIG as well.
E: So as we start to roll that out, we should start to add priority fields once we go to beta, but other than that, there is no plan to do the swap. Another comment I wanted to make is: a lot of new additions go through alpha/beta stages, even for fields, and do field-level versioning. I don't see why this couldn't be a case for that — field-level versioning — and modify the documents to basically say this is an alpha field, primarily for a specific use case.
A: That's helpful, and I think it makes sense. But more from the tactical side, I think there are just a lot of people who are concerned about all these aspects. — Could you elaborate on who the people are? — I just don't know everyone exactly, so I'd have to kind of go back through it, if it's recorded, and...
A: The last meeting was mostly — SK was the person who spoke up a bit, and then I think the other people who have concerns have commented on the design doc PR, which is linked from the agenda notes; there are a bunch of people in there. And I think, yeah, going back to tactics, it's been hard to say: well, what precisely would you like, in order for us to move past this?
A: You know, here at CoreOS I don't really have a lot of bandwidth, nor do I think I could really resolve this on my own. And since this is a really important feature to get in for 1.9, for sure, I'm going to have to humbly request some help to push through some of this — more of the kind of getting-people-on-the-same-page sort of effort.
B: So I'll say two things. One, I'm happy to talk to Kenneth Owens and try to get more context from our side. And Clayton also mentioned in chat — and it looks like he's on the meeting today — that he had a couple of questions, and on the doc he was looking for some more context. I don't know.
J: Because the host port isn't required to actually bind; it's just required to schedule and to get the exclusion, and also for some network providers to do that. So I just wanted to get more context — I think we can talk later, but I would be happy to help, because this also comes up for other cases that SIG Apps does care about. So I just want to — oh.

J: It was more the bootstrapping: like, if you want to bootstrap a CNI plugin, being able to use a host port — you can't get it from the CNI plugin, because it's not running yet. So that was the question I had asked on the doc, but I wanted to explore a little bit why the host port was actually required, versus us potentially kicking it up a level and saying: if you want to be a bootstrappable network provider, you must do this, this and this, so that bootstrapping can consume it. You know what I mean? Okay.
J: If you specify a port on your pod and specify host networking, it will treat it as a host port, okay? If you omit the port, you can still build a service from it. So I just didn't know what the gap was; like, maybe you were relying on this so that the network provider would do something? No?
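
(A minimal sketch of the two pod shapes being contrasted here; the names are illustrative, not from the meeting:)

    # With hostNetwork, a declared container port occupies the host port,
    # so the scheduler's port-conflict exclusion applies:
    apiVersion: v1
    kind: Pod
    metadata:
      name: cni-bootstrap               # hypothetical
    spec:
      hostNetwork: true
      containers:
      - name: agent
        image: example.com/agent:v1     # hypothetical
        ports:
        - containerPort: 9898

    # Omitting `ports:` above still lets a Service select the pod; you just
    # give up the scheduling exclusion that the declared port buys you.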
A: I mean, for context, this was actually just from SIG Apps people reviewing the design doc. It was something they called out as something that specifically needed to be handled, mostly from a UX perspective: it needs to have a good story around it for users, so that it'll have at least some sort of parity with how host ports can be used with the other strategy, basically, right?
J: I just didn't know whether that was blocking, because if you drop the port from the pod definition, you can still create a service to it. You can still bind the host network and expose the host port; you just have to do a little bit more work on your own, right? And then you can get the double DaemonSet scheduling, which is neat, and...
B: That's basically the first proposal in the doc: yeah, if it's explicitly going to conflict, we reject it out of hand, and if people need to work around that, then they have ways to do that, but they have to modify their code. I think that's okay. I'm not sure why we're trying to make this system so smart here, instead of just telling users, sorry, that doesn't work — especially if it's unneeded for the one use case we have a strong reason to put this feature in for. Yeah.
J: Fine, maybe we can circle back in email later, or in chat or something later today. I just want to see if there's something we can get around and recommend, because it is valuable for bootstrapping to be able to do a host network and to build services, so there probably is a case there. It's just, like, I think SIG Apps definitely had the sense that the new strategy is a very big new thing — versus, you know, it might be something to be considered in 1.10, and if there's a small...
E: Is there any reason why we can't mark it? It's an existing field, right, with a new value. We don't actually have a good versioning policy, I would say, even for fields themselves, let alone modifications to enumerations in fields. Do we even have a formalized policy in API machinery to add these things and mark them as alpha?
J: Oh right, adding a new enumeration value as alpha is actually very scary. That's the one that we probably can't do, because you can't help but get exposed to it if someone turns it on. But again, as an alpha feature behind a gate, it would probably be okay. So, I mean, really, everything now just has to be behind the gate, and if you turn the gate on, you will break when we change that; like, you don't get any guarantees. That seems...
A: So I guess, in terms of the near term, like, what can we do in the next week? I don't know if maybe Robert or others have thoughts — like, is my doc kind of missing the mark on some level, or, you know, is there another way I should try to approach SIG Apps or other folks? I don't personally have a clear next step.
J: I think the fundamental argument from SIG Apps was: if you don't want host ports to conflict, don't declare them and everything should work; you just don't get the magic host-port networking — those host ports were the major problem, networking-wise. So there's another angle there, which is SIG Network: I think you're right about the exclusionary behavior there, but you're already going to have to wait for all the networking plugins to change, even if we change the behavior for the exclusion stuff, because they'd still have to respect it.
B: Yes — I'm not sure what else to tell you about next steps. Like I said, I'll try to circle around with Kenneth, and Clayton's going to start a conversation again in Slack later, and we'll try to figure out the best path from here. I mean, I guess I'm hoping that there's a path that doesn't involve coordination between us, SIG Apps and SIG Network, because it seems like that's gonna be really slow. Yeah — well, it'd be nice.
I: Yeah, so this was sort of an oversight in 1.8.0. We started uploading that ConfigMap with the kubeadm configuration, and one item in that configuration was the initial token — like the default token from when you do init. I mean, we were sort of lucky that we had dropped the default TTL from infinite down to 24 hours, so for most installations that token is actually stale by now. But Fabrizio did a PR to strip that token from the uploaded configuration, which also applies on upgrade.
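
(A hedged way to check whether an existing cluster is affected; the ConfigMap name is an assumption based on kubeadm of this era, so verify it against your install:)

    kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i token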
I: If you want to take a look, in kubeadm issue 485 there's that proposed kind of known-issues note, you know, for the release notes. I don't want to be alarmist — I don't think this is a huge deal — but I think it's something that you'd want to know about if you installed a 1.8.0 cluster.
B: Yeah, so he's asking you to tag him on the issue or hit him up on Slack — so yeah, please follow up. Right, so next was a question about integrating kubeadm and kube-bench. kube-bench is basically an analysis tool for Kubernetes clusters that tells you if they're configured securely, and I think Lucas was proposing we run something that hooks into that — I can't tell exactly if he's proposing it on our CI tests or on released versions. It looks like Liz is on the call from the kube-bench team, yeah.
L: Hi. Unfortunately, Lucas doesn't appear to be on the call, so...
L: Since I've joined, I might as well, you know, raise the topic. Yes, so kube-bench is currently not part of the Kubernetes project, although we're totally open to trying to figure out how to make that happen — whatever processes need to happen to make that happen; we haven't done that yet.
L: But I guess, setting that aside, what Lucas had said is, you know, it would be good to find some way of integrating it into the Kubernetes automated tests. We're totally open to that; I don't know what it would require. That's probably all I can say at this point, so I guess I just wanted to at least say that we're very much open to that.
L: The CIS benchmark itself is about a 200-page spec, so there's a ton of tests. kube-bench is basically a Go application, and we've packaged it up as a container so it's easy to install. The tests are configured as YAML files, and you just basically run 'kube-bench master' or 'kube-bench node', depending on what kind of node you're on, and you get a list of pass/fail results and information.
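
(A minimal sketch of running it the containerized way described above; the image name and flags are assumptions, not from the meeting, so check the kube-bench README:)

    # On a master node:
    docker run --pid=host aquasec/kube-bench:latest master

    # On a worker node:
    docker run --pid=host aquasec/kube-bench:latest node

    # Output is the list of CIS checks with pass/fail/warn status plus
    # remediation hints; extra host mounts may be needed so it can read
    # component config files.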
L
If
there's
no
clear-cut
results
and
a
set
of
remedial
steps
that
you
could
take
to
correct
or
to
bring
yourself
in
line
with
the
CIS
recommendations,
the
nature
of
those
tests
is-
and
you
know
sometimes
people
might
have
good
reasons
for
not
wanting
to
comply
with
all
of
those
features.
But
you
know
in
the
absence
of
a
good
reason,
they're
probably
wise
security
measures
to
take.
B: So, we found over time that passing lots and lots of flags to the binaries has made this very difficult to configure consistently. We're sort of moving the configuration for each of the system components into what we call component configuration, which is configuration that lives inside of the Kubernetes system, generally in ConfigMaps. So in 1.9 we're targeting the kubelet to use dynamic kubelet config, where the kubelet will start off with very minimal configuration and read the rest of its config from a ConfigMap. You know, kube-proxy already does this.
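
(A minimal sketch of component configuration living in a ConfigMap, modeled on the kube-proxy case just mentioned; the exact group/version and field names varied by release, so treat this as illustrative:)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-proxy
      namespace: kube-system
    data:
      config.conf: |
        apiVersion: componentconfig/v1alpha1   # group name of this era (assumption)
        kind: KubeProxyConfiguration
        clusterCIDR: 10.244.0.0/16             # illustrative value
        mode: iptables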
B: The scheduler is moving in this direction, the autoscaling team is moving in this direction, fluentd is moving in this direction. So we're taking a lot of the system components that we run and trying to shift them so that they don't require a lot of command-line flags. The upshot would be that, to sort of audit the system, instead of having to run something on every single node, we should generally be able to just run something centrally that talks to the API server and audits all of the configs for the system.
A: It's nicer for development; people on the call have wanted that in the past. The problem is, we need to build a versioned configz — the configz endpoint as of now isn't versioned, so it's not a stable API, and I would not recommend that anybody build tooling around it. But I think Michael Taufen was working on it; he has a public design doc on that, which he circulated. I could pull it up and paste it into the chat if you guys are interested.
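
(For reference, a hedged sketch of peeking at that unversioned configz endpoint; the port, auth requirements and exact paths depend on how your kubelet and cluster are set up:)

    # Directly against a node's kubelet:
    curl -k https://<node>:10250/configz

    # Or through the API server proxy:
    kubectl get --raw "/api/v1/nodes/<node>/proxy/configz"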
G: At the same time, like, this is a valuable effort; we shouldn't say that it isn't. Like, the means by which we get those flags might change over the next six months, but I think this is cool. I'd love to see this in; like, I'd love to see this run against kops, and I think we should try to get this into all the e2e tests, like the kube-up ones as well. Yeah.
G: Someone did this already: they made the log dumper in Kubernetes run as a DaemonSet. I don't know if it's gotten some updates or not, but it runs as a DaemonSet, and therefore it can run on each node, and it does it in parallel, which is really sneaky. So it might be overkill for this case to test every node — but you never know, maybe one will be different.
B: Okay, well, it sounds like — we're starting to run out of time — there's a couple of follow-ups we should do here. Like Justin was saying, this seems valuable to integrate with sort of any of our e2e testing, not just kubeadm, but you've got to start somewhere. So if Lucas is excited about getting it integrated, maybe he can push there, or he should talk to Justin about trying to get it working with kops, and we'll try to port it to the other ones as well. Yeah.
B: Thanks. So next is the thing I've been putting off, because I didn't realize Mike could join the call, which is: solutions for adding labels to newly joined nodes. It looks like there's a feature request, and there's an option to do this on the join command — so if you join, you know, you could pass additional labels we'd send to the API server.
B: There's another issue about taints and node IPs, and there's a discussion in SIG Auth which is part of this; I linked it for Mike. He also has a proposal for how to sort of flip this on its head as we start to trust nodes less, giving nodes less scope, in terms of centralizing things like setting labels and taints from the control plane and not from the nodes themselves.
A: ...which is ongoing work by Jacob Simpson. I don't think any of these features can proceed with the current self-reporting mechanism, so my hope is that we can move label initialization, taint initialization and address initialization from the kubelet to a central controller that we can trust a little bit more. I don't know if it blocks anybody from using the kubelet label or kubelet taint flags; I think that eventually somebody will enable a central initializer for nodes, and those flags will just stop working.
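
(For reference, the kubelet self-registration flags being referred to; the flags exist upstream, while the values here are illustrative:)

    kubelet --node-labels=pool=high-integrity \
            --register-with-taints=dedicated=accounting:NoSchedule \
            ...   # remaining kubelet flags elided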
F: So one of the problems is that kubeadm doesn't actually set kubelet flags; we assume the kubelet was already up and running. So that's — maybe, I don't know — how does that work, you know, if that taint flag is reflected into the component config stuff that is centrally controlled? That seems a little bit bouncy; I'm not sure.
A: So I would recommend reading the doc to get further detail, but it presumes the existence of a machine database and strong node identity when nodes join. So, for example, in AWS: if the node join procedure was gated on the node producing the VM identity document, we'd have a pretty strong guarantee of the identity of the node as it joins, and then the central controller can go look at the machine database — which is the EC2 API — verify the zone, verify taints, verify...
A: ...hopefully maybe the OS image and version, which is different than today. So, for example, take the kernel version, the example that I mentioned earlier: somebody wanted to report the kernel version so that they could write a label, or a selector on a ReplicaSet, that would prevent their application from running on nodes with kernel versions that had known vulnerabilities. That doesn't make sense in the current model, and I think allowing people to try to build security...
F: ...like the VM API, the AWS API, or what have you — or there'll be situations where some of that machine database will be, you know, intrinsic to the cluster. So I can see that being, you know, a ConfigMap as part of the implementation that goes through this labeler. And so then, in the flow for the user, they do a kubeadm init, and then they actually do, you know, a kubeadm or kubectl sort of "add machine entry" or something like that.
C: I don't think it's been — so, there are different characteristics which people have shown interest in overriding: one of them is labels, another is taints, and then there is the node IP, which is a flag sent into the kubelet. So I don't think we've gone as far as thinking about which clients would be necessary to do that, or indeed whether we would need a client at all, or whether we could just put it into the component config or something like that, right.
G: ...to a central controller that applies it — is it an admission controller or something that gates it, right? And it seems that we will always have user-level labels and taints, because users have a desire to, like, segment their cluster into different types of machines, for example. So you will always have the ability — you will always need the ability — to add your own labels at some stage, even once there are protected labels. Yes.
A: This is specifically for nodes — so system:nodes, or system:node:<hostname>. These identities would no longer be able to set labels, IP addresses or taints, and removing that ACL would break the current kubelet flags. A user would still be able to go and update labels if they had permission; the nodes just couldn't self-report them.
F: You're like, okay, that's kind of bad. And so what that means is that there can't be credentials on the machine for setting node labels — for some definition of node labels, whether protected or not. I mean, yeah — but with the meaning of labels, if people start doing work based on them, then you're going to get into a bad scene.
A: With CAP_NET_ADMIN on a host you can middleman kubectl execs, which is bad; worse would be enabling this. I guess it's a question of priority: there are quite a few security holes in Kubernetes, and we need to prioritize which ones we start patching. I'd be happy for somebody to start working on this; I don't think it's prioritized over other security features.
C: Mike, would you mind putting a link to that talk you mentioned into either the meeting notes or the comments section? — Sure.
C: I don't think it is, because it would require a little bit of work on our part, and it doesn't really make sense if, in the mid-to-long term, it's going to be something completely different. Yeah, and I'm also kind of interested in, like, this central controller — how coupled is it to sort of an external cloud API? Because if we're referencing my AWS or OpenStack APIs for metadata, then how's that going to work if I'm not integrated with a cloud, right? So yeah, that was one question I had.
A: I've had this exact use case at a bank, where I had sort of high-integrity workloads, like accounting databases, and low-integrity workloads, like an internal Facebook-style app, and I didn't want them to be co-located, so I had separate node pools for high integrity. So this was, like, our design proposal — a design plan for this, right: a low-integrity worker pool and a high-integrity worker pool. But with the current system, there's no way to stop the low-integrity worker pool from upgrading itself into a high-integrity worker. Yeah, exactly.
B: All right, so we have a minute left. I just wanted to make sure that that conversation got started, even if we can't finish it now. I know the last thing on the agenda is that CI jobs are broken; I just tagged cluster-lifecycle bugs on that issue, because I'm not sure people have noticed yet. If anyone has some spare cycles, go take a look at that; it looks like the issue is asking for help in a couple of places.
B: And with that, we are just about out of time, so I think we're gonna call it. Thank you all for coming. There's another meeting at 10:00 for kubeadm adoption, if people are happy to hang around for that. Otherwise, we'll see people either tomorrow at the breakout meeting or next Tuesday. Thanks.