From YouTube: Kubernetes kops office hours 20200424
Description
Recording of the kops office hours meeting held on 20200424
A
Hello, everybody. This is kops office hours. Today is Friday, April 24th, 2020; happy birthday to my sister, whose birthday it is. I'm your moderator/facilitator, Justin Santa Barbara; I work at Google. A reminder: this meeting is being recorded and will be put on the Internet and other properties. Be mindful of our code of conduct, which is essentially to be a good person, and try to respect the agenda. I put a link to the agenda in the Zoom chat.

A
Please feel free to add your name to the attendees list, and add any items you would like to discuss to the agenda. We do have a very full agenda, so please do put your items on the agenda at the appropriate place, so we can be sure to get through them all. I think we'll get through them all, but yes, we might need some agenda-side discipline.
B
etcd-manager creates certificates that are valid for one year, and there's no autorotation support built into it. etcd-manager was introduced almost a year ago, in, I think, kops 1.12, maybe, and so there are many people who have been running etcd-manager for about a year now, and their certs are going to expire soon, which will result in an outage unless they manually rotate them. So I...
C
I have a question about that. We've been running kops since, like, before 1.12, probably a single-digit version. I downloaded a couple of our certificates from the S3 bucket, ran openssl inspect on them, and they seem to have an expiration of ten years. So I assume we're probably not impacted, and that this only affects users whose initial cluster creation was after 1.12. Is that correct?
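For reference, the check described above amounts to something like the following; the bucket name and path are placeholders, not the exact kops state-store layout.

    # Copy a cert out of the kops state store and print its validity window
    aws s3 cp s3://my-state-store/my-cluster/pki/issued/ca/1234.crt ca.crt
    openssl x509 -in ca.crt -noout -subject -startdate -enddate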
A
I think I can... let me go... so, first of all, sorry about the issues. There are two types of certificates: there are the CA certificates, and other certificates, like the one for the API server; they're generated by the kops CLI tool. Okay...
A
...years, because I knew that, you know, one year is not enough. And then, in etcd-manager, it also dynamically generates, not the roots, not the CA certificates for etcd communication, but server certificates and peer certificates, and those are the ones which I think have one-year validities.
A
What I had meant to do, and did not do, was not persist those, so that they would be rotated, as long as a pod didn't last for more than a year, which was perhaps aggressive in and of itself; but the fact that they were persisted is doubly bad. So I'm glad... thank you, Peter, for catching us before it hit everyone. As I said, someone put up a patch to rotate them automatically if they have less than 60 days left, which I think is good.
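As a sketch of the threshold in that patch: openssl's -checkend flag exits non-zero when a certificate will have expired within the given number of seconds, so a 60-day check looks roughly like this (filename illustrative; the patch itself presumably does the equivalent in Go inside etcd-manager).

    # Exit status 1 means the cert expires within 60 days (5184000 seconds)
    if ! openssl x509 -in etcd-server.crt -noout -checkend 5184000; then
        echo "less than 60 days of validity left; rotate this certificate"
    fi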
A
I was suggesting, actually... basically, it looks like (I had a look) it would be a little bit harder to just not persist them, so I was suggesting to just regenerate them if they're more than 24 hours old. So, in other words, it only does the check when you first start: regenerate if they're more than 24 hours old. We could also go to two years or three years if... well, if we aren't happy about having to rely on the etcd process or pod restarting at least once a year.
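A minimal sketch of that regenerate-if-stale idea, keyed off the certificate's own issue date; GNU date syntax, filename illustrative, and the regeneration step itself is left out.

    # Regenerate on startup if the cert was issued more than 24 hours ago
    issued=$(date -d "$(openssl x509 -in etcd-server.crt -noout -startdate | cut -d= -f2)" +%s)
    if (( $(date +%s) - issued > 86400 )); then
        echo "certificate is older than 24 hours; regenerating"
    fi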
A
In theory, some processes will reload the certificate. I am not convinced; there have been a lot of bugs around this, so it's probably not something to rely on, especially if it only happens every two years, or every year, to someone. Right? That's going to be a really difficult one to figure out. I feel like people probably should be restarting their masters at least once a year.
A
I think that probably the more important thing to do is figure out what we do about all the people out there that are, like, on the clock. Obviously we should fix etcd-manager, and, wait... we can backport it as far as we can. It might even be worth doing a 1.15 backport for this, and then, like, a kops 1.15 backport; I don't know if we could. We could post the workaround, which is to, like, SSH in exactly, and delete your certs and restart, right. Sorry, I broke up; there's much more.
A
We can keep it at one year, and then, like, the fact that etcd-manager will, like, drop is almost a feature; it's like saying: hey, you haven't restarted your etcd-manager in a long time. It's sort of an obnoxious reminder, though, like, to have some downtime at whatever time in the morning, or whatever it is, that you happen to have last restarted a year ago. That's etcd.
A
So, if we move forward with the PR to basically regenerate them on startup, even if they do persist, more or less all the time, eventually... Like, the 24-hour thing is so that if you're in a crash loop it won't constantly be regenerating SSH keys, or RSA keys, which is sort of expensive on the health check. And we're sort of tepid on going from one-year to two-year: we'd definitely need a new image, and we may need to backport to other kops versions.
A
Yes, we probably could force a newer version of kops, and I think we might even better do a link to a thing, sort of... yeah. I feel like it's definitely worth doing for the 1.12 versions: if you're running kops 1.12, you probably... yeah, and then yes, definitely. Yeah, I think that's a great idea; I think that's a good approach. That's what we do for, like, security vulnerabilities, and this is arguably... we've already done that for smaller security vulnerabilities than, like, your cluster going down. So this is a... yeah.
A
The impact will be: you are forced to upgrade kops, and if you don't want to upgrade kops, you have to pass one of the environment variables, which is like "kops run obsolete version" or something. So it will break CI; it will temporarily break your CI if you don't update your kops version.
D
In newer versions, it actually tries to upgrade the packages, or something, to install them. Okay, at some point it fails to run apt-get, or maybe just some parts are gone and not everything. The idea is that it also affects builds, not just 1.10: let's say you want to build some of the Docker images; they are based on very old Jessie stuff.
A
Yeah, I'll have a look at that. And then, separately, let's talk about the utils. Yes, it would be great to... so, the utils issue, for those who don't know: some OSes don't include some important binaries. Hakman, it looks like you called out socat and conntrack, which sounds right, and so we try to build those statically linked and put them on the host OSes where they have to be.
D
I made my account in Google Cloud, so I tried the COS image; I found both the binaries there in the latest one I could find, so I would say that this one is working okay. There was an issue with Flatcar; I also joined in on it, and they included both... well, they included conntrack, because socat was already there, and it will be in the next release, probably in a week. So it's okay. And CoreOS is gone for good, and the Fedora guys are considering including it in their Fedora CoreOS.
D
They first want it to be split into a conntrackd and a conntrack binary, so I think that will take forever, because it's Red Hat. But I don't think we should wait, and there is that comment that I don't really understand the details of. So I don't know why these were needed in the first place, but it says there that in Kubernetes, starting with 1.19, some things won't need those binaries anymore, or at least not conntrack.
A
There is now another distro called Fedora CoreOS, which is different, right. And so any users of CoreOS, of the distro formerly known as CoreOS, we should divert to Flatcar, mm-hmm. And we should not necessarily assume that we support Fedora CoreOS just because we supported CoreOS. Is that fair?
A
It was surprisingly difficult to build static versions of the binaries from the upstream packages, mm-hmm; or, I found it difficult. There may be people that are very familiar with how to do this, and I'm just like... well, tell me: you just pass these flags, or whatever it is. But yes, it was not as trivial as I hoped it would be. The...
D
The official announcement on the Red Hat site is: don't do static binaries in general. I mean, for this kind of thing, where you start to bundle pretty much a lot of stuff just to get some commands going. Okay, I think... I don't know, how do you want to do it? Do you want to look into dropping this, or should I just make a PR and assign it to you? What...
A
Why don't we start off by not shipping the utils on the distros where we believe them not to be needed, i.e. COS and Flatcar, even if Flatcar is gonna be, like, a week away? And then we have that grid of tests now, so we might actually be able to verify that we are okay, and then that would be great. And then it's just CoreOS where we...
A
Yeah, yeah, I mean, we should def... we should certainly tell people, and we have the support for it; we have, like, the Flatcar thing, right, mm-hmm, yeah. I don't know what the right approach is: should it be deprecated? Should we... I don't want to force someone to... I don't know.
A
I can turn them off; I can do that, okay. And we certainly want to support the utils; well, we certainly want to add socat and conntrack on arm64, okay, if that unblocks you. And then we can deprecate at least CoreOS once we get Flatcar working, and then we say: look, go use Flatcar.
D
There is a possibility to put it in more places: the channels file, the same way we have the binaries and the images. And I was thinking of a way to even not pull the channels, for people that don't really want to. Basically, the channel just gives you the recommended Kubernetes version and the recommended image, right? Yeah.
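For context, the channels file is a small YAML document; abbreviated and illustrative (field values made up here), it looks roughly like this, with the authoritative copy living in the kops repo:

    spec:
      images:
        - name: kope.io/k8s-1.16-debian-stretch-amd64-hvm-ebs-2020-01-17
          providerID: aws
      kubernetesVersions:
        - range: ">=1.16.0"
          recommendedVersion: 1.16.9
      kopsVersions:
        - range: ">=1.16.0-alpha.1"
          recommendedVersion: 1.16.0
          kubernetesVersion: 1.16.9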
A
Yeah, you certainly can. So I think what you commented, Peter, or was it Peter that commented in the issue, is good, which is, yeah: I think the key point is, we don't use the channels file from GitHub other than when you're running the kops binary. So once you've created your cluster, there is no... there should be no dependency on GitHub.
A
GitHub is in the list of mirrors, but it's in a list of mirrors; it's not... there's a fallback, right. So it's one of many sources to pull them from. So it's only when you're running create cluster or update cluster, or presumably, like, delete cluster as well, for bizarre reasons. And so that, I think, is important to tell people that might be worried about this. You can also, like, easily override the channels file with your own copy.
A
If you want to, like, put it on your own S3 bucket. The problem is it's one of our, like, authoritative sources of truth. It's not... I don't know whether it's a cryptographic root; I don't think it's one of our cryptographic roots, as it were.
A
In other words, whether it's the kops releases or the Kubernetes releases, we will be in the same situation, as in: you can't bring up your cluster without that location, or a mirror location, being accessible anyway. And so we will be no worse off, and perhaps people will feel more comfortable with the infrastructure around the Kubernetes releases than they are around, like, GitHub; pulling raw from GitHub is, like, less protected than the Kubernetes releases are.
A
I don't know whether we are, therefore, like, hitting some sort of rate-limiting thing as a project, and that is causing this to be particularly bad. Because, if so... we have, like, three mirrors now: we have artifacts, we have the S3 bucket, and we have GitHub. And if I... I forget, I think the order is artifacts... I think the order is: artifacts, GitHub, S3 bucket. And so, if artifacts isn't populated, we're putting more load onto GitHub, and it might be that they are throttling us now.
A
True, yes, yeah. I think it's documented how to use the alpha channel, which is: you just say channel equals alpha. But, like, the idea that that can actually be a fully qualified URL instead is not documented, I don't think. We're certainly not, like, documenting best practices for, like, reliably isolating yourself from anything else in the world. Yeah.
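Concretely, the two forms being contrasted look something like this; the bucket is a placeholder:

    # Named channel, resolved from the kops project's hosted channels file
    kops create cluster --channel=alpha ...
    # Fully qualified location, e.g. your own mirrored copy of the channel
    kops create cluster --channel=s3://my-bucket/kops/channel.yaml ...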
A
Which is a, yeah, tricky change, sort of, in terms of strategy for our management of those manifests. Okay, I don't think we can make a decision right now. I feel like, yes, in theory we should go there; I think if we have to do it along with the user-facing change, that will probably be riskier, and I'd rather come back to it in 1.19. If we can split it out, I think we can probably do it in a manner that's reasonable to people.
A
I think there are some there; we can come back to those later. But yes, I think if we can look at the two, then we can make a decision, maybe next time round. I have an image-building approach, which I think is a nice way to build things, so I'll throw that in the mix as well. But I certainly take your point that it'd be nice not to have to build the image ourselves; though it is also a nice thing we can offer to users, so...
A
There is, though, the project that is actually spinning up to build these more automatically, so there is actually momentum there. But yeah, I feel like: let's get some data on the various options. Thank you for running a bunch of 20.04 stuff, even before the launch; that was awesome. I think that was you, right? And yeah, that's so cool. So now we have, like, some signal already, and yeah, it certainly looks like Ubuntu 20.04...
A
...is a likely candidate, and Buster looks like a likely candidate. I don't know if anyone else has other preferred ones, but those are the two that I see as the most likely candidates. It is nice to have a recommendation; it's nice so people don't have to... we don't want to just, like, give them a choice without really any data with which to make that decision. And so that would be my... we should gather that data, and then make a decision to make a recommendation, I think.
B
Yeah, this is just a high-level thought I had: we could add an etcd-manager version to channels, and then kind of assign kops version ranges to etcd-manager versions, so that people could more easily upgrade their etcd-manager without having to download a new kops release, unless something like the rest of the manifest also needed to change. But if it's just the container image changing, maybe that's something we do in channels.
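Hypothetically, if that proposal were adopted, a channel entry might grow a field along these lines; this is purely illustrative and not an existing schema:

    kopsVersions:
      - range: ">=1.16.0"
        recommendedVersion: 1.16.0
        etcdManagerVersion: "3.0.20200428"   # hypothetical field, placeholder value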
A
That's a good idea, I think. I also had a PR a while ago to actually put the manifests up there; I think it was in a different file, so it would make things harder for anyone mirroring channels, so we should figure that out at the same time. Yeah, I think it's a good idea to, like, decouple these things in general; yeah, I think that's a really good idea. I don't know whether we should do just the version or whether we should pull out the whole manifest.
A
Okay, I have the next one, which is: I uploaded all our YouTube videos as of two weeks ago, so we're up to date on some of them. I accidentally left comments enabled, and for once that worked out well: someone posted a good comment that we should try... GitHub Actions supports macOS. So last time we talked about Travis being problematic, and the reason why we continued to use it is because of macOS, and maybe we can try to get GitHub Actions going to give us some macOS coverage.
A
I briefly googled it, and it seems like it is true. So that would be a great thing for anyone to contribute, if they're interested, in adding a GitHub Action to run. I think, basically, the thing we care about on macOS is that you can go build the kops command, or maybe make kops; those are the builds that are done when you install it using the Mac package manager whose name is Homebrew.
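A sketch of what such a workflow could look like; the build target mirrors the "go build the kops command" requirement above, and all names here are illustrative:

    # .github/workflows/darwin-build.yaml (illustrative)
    name: darwin-build
    on: [pull_request]
    jobs:
      build:
        runs-on: macos-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-go@v2
            with:
              go-version: '1.14'
          - run: go build ./cmd/kops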
A
Also, the next item, which is: there we go, I added some more tests. So John brought up the flannel issue previously; I looked into it, and it is messy. I added some more tests to try to basically get some coverage. I sort of am calling them a grid, because it's basically just doing, like, every permutation. It's showing us that we are doing okay on our stock configuration, that we're doing okay if I set just the CNI, and that we're doing okay if I set just the distro.
A
So we can certainly add a lot more jobs by going from daily to weekly, and I feel like we still have capacity to do some more permutations. If there are other things, like if we want to test HA masters, or if you want to test private topologies, for example, we can throw those in. The other one is: we could also test older Kubernetes versions, and we can also test older kops versions.
A
At this point, though, the grid has now exploded, so I don't know if it will still fit in the 168 hours that we have in a week; it probably won't, but we might be able to, like, go up to monthly scheduling or something like that. Anyway, it seems like it's an interesting way to gather a lot of data. It's not as good for, like... I know Peter has kind of done great work to, like, make sure we actually test the ones we really care about.
A
It might be a month old, but we have a run, and we can, like, look at the sysctls and compare sysctls across distros, or across working versus not working. And we can add more things, like sysctls, to the mix when we figure out what the other things are. I did think about collecting the sysctl settings, because we know that's a likely candidate, but I didn't do that yet.
A
For context there: I think there were two issues that we are sort of aware of. The one that we spotted was in the PR; it was in the PR that someone sent about a month ago, possibly even longer ago. Which is: the RHEL distros changed some sysctls to not put bridge traffic through iptables (I'm waving my hands a bit), and that change was, let's say, incompatible with flannel. And so the first fix was to basically fix those sysctls.
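For reference, the sysctls in question are the bridge-netfilter ones; flannel expects bridged pod traffic to traverse iptables, so the fix roughly amounts to:

    # Ensure the br_netfilter module is loaded, then restore the setting
    # flannel expects (run as root)
    modprobe br_netfilter
    sysctl -w net.bridge.bridge-nf-call-iptables=1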
A
The thing which gives me pause is: it's supposed to be reported in the journal, in the systemd journal, and thus in the logs, and I have not yet seen that in many of our failing tests. And we do see failing tests that are failures of the API server to talk to services, which should be the same case as has been reported elsewhere, where host-network pods cannot talk to services. So I don't know if anyone is happily running flannel on RHEL or CentOS, but if you are, I would love to know how you're doing it. I suspect you're...
A
But, like, that's also why I did... like, the one I wrote a while ago, which is now very similar to flannel: like, it uses VXLAN overlays. So, like, that's why I threw that one into the mix, so that we could compare, like, flannel's VXLAN versus my VXLAN copy, and see what the difference was, if there were differences. Anyway, alright, so there we are: there's the agenda, and then maybe a release-plan discussion in the nine minutes after... Hakman, arm64 support. This is great news, so...
D
I made it work, at least for worker nodes; there are still things to do in general, like if we want to get masters to also support ARM. It's not huge, but I'm not fluent in Bazel, so that's a challenge. Other than this, I used a different approach than was previously suggested in the code: instead of trying to figure out what the instance group architecture is, I just pushed the binaries to the host and let nodeup figure it out.
D
So basically, it's pretty easy to decide the architecture of the host once you start running there. (And did you also tweak the bash script to, like, look at it?) So basically, I ship both the amd64 and arm64 binaries, and the bash script checks uname -m, and after that it loads the appropriate nodeup, which already knows the architecture it's built for. Oh, that sounds...
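The dispatch described is essentially a uname check in the boot script; binary names here are illustrative of nodeup's per-architecture builds.

    # Pick the right nodeup binary for the host architecture
    case "$(uname -m)" in
      x86_64)  NODEUP=nodeup-linux-amd64 ;;
      aarch64) NODEUP=nodeup-linux-arm64 ;;
      *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
    esac
    exec "./${NODEUP}"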
B
So this PR is just to allow us to create clusters using that etcd-managed Cilium configuration purely with create-cluster CLI flags, so that we can test that in our prow jobs: because we can't provide a manifest to prow jobs, we can only provide create-cluster flags. So I think it overall looks good, but I just wanted to get more eyes on it before we commit to CLI flags like that. So, someone else can take a look at that.
A
I'm working on the generalization of the override, but I definitely don't want to be, like, supporting arbitrary, like, patches; there are better ways to do that. So, if that makes sense... I would have a look to see if it's possible to, like, generalize the concept of having additional etcd clusters, but it probably is not worth it.
A
Alright, so, with our five minutes remaining... oh, that's actually quite nice: someone pasted the release plan for the upcoming two weeks, thank you. I think this is, yep, pasted from previously. I did do the 1.16 release; there was a request for that, so that's crossed off, thank you. And then I think the other ones have just carried forward.
A
I have sort of been blocked on trying to understand the state of flannel. I feel like, if it's pre-existing, we should just roll forwards with a release note, but we also don't yet have the test coverage to prove that it's pre-existing. So maybe I need to throw a Kubernetes 1.16 into the grid, doubling it immediately. But, I don't know... John, you've, like, taken up the mantle of flannel, yeah?
A
Good shtick, I like that, yeah; that is good. But yes, I feel like, if it's the same behavior as it was in 1.16, we should put notes in there saying that. I feel like right now the note would be pretty long; it's like: these sets of combinations are known to be bad. And so that's a little... I want to have a look at the grid and see whether I can tackle any of those.
A
But I don't think we'd hold up the releases for that. And so, yes, it's the other items on the list that I think would be a great goal for the next two weeks: another 1.18 alpha; the 1.17 release, which is blocked on figuring out the status of flannel and stuff; and Buster AMIs are getting closer.
A
In a very different way, I did put up my PR; you should have a look at it, it's sort of interesting. The idea is: you can build an image using Docker, mm-hmm, and then you can wrap it; you can basically convert it to a disk image. You basically install the kernel in your Docker container, or whatever you need to do, and then you, like, copy it into a disk image. The advantage of this is we have relatively self-contained tooling that we are relatively familiar with.
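A very rough sketch of the docker-to-disk-image idea; this only produces a bare ext4 filesystem image (kernel, bootloader, and partitioning, which the PR has to handle, are omitted), and it assumes mkfs's populate-from-directory mode:

    # Build a root filesystem with docker, export it, copy it into a disk image
    docker build -t osimage .
    cid=$(docker create osimage)
    mkdir -p rootfs && docker export "$cid" | tar -x -C rootfs
    truncate -s 8G disk.img
    mkfs.ext4 -F -d rootfs disk.img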
A
So, the thing I don't like about bootstrap-vz is, like, a lot of the logic was very, very opaque; like, you had to dig through layers and layers. And I feel the same way about the new official Debian fully-automated-install thing: it's similarly opaque. So I'm hoping we can find something a little bit less opaque, or we just use the upstream images and say: this isn't our problem; we don't want to solve this problem.
A
There is a lot; we do retain them. We do retain them for quite a long time period, but I think we retain them on a sort of decreasing granularity; it's supposed to be a decreasing-granularity scale. So, like, I think we keep all of them for the past 24 hours, and then, let's say, hourly for the past week, and daily for the past month. It's supposed to look like that; I don't know what it actually looks like, so that's what would be interesting to compare.
A
What we typically do is: we actually just put them in the alpha channel first, and we let them run through the test grid, the e2e jobs. And then, if everything looks good and no one reports any problems on the alpha channel, we basically promote it to the stable channel after between one and two weeks. If it's a critical security issue, we can just go straight to stable; I don't believe there's a security issue here.