From YouTube: Kubernetes SIG CLI 20201216
A
Okay, good morning, good evening, good afternoon, depending on where you are. Today is December 16th, and this is another of our bi-weekly SIG CLI calls. My name is Maciej and I'll be your host. Today we have a pretty light agenda, so let's get to it right away. First of all, big congratulations and shout-outs to Brian, Zao and Siam — thank you very much for all your hard work. Your input and the time that you spend on reviewing and improving kubectl are invaluable. Thank you very much, and you totally deserve these rewards.
A
I'm pretty sure that the majority of us will be busy thinking about New Year's Eve rather than interested in working on SIG CLI related stuff, but we'll see each other right after the new year. If I checked correctly, January 13th is the next call we will be having. I guess that's all when it comes to announcements. Introductions — do we have anyone new to this call? I'm looking through the list and I think I recall all of the names, but if you're new, feel free to speak up.
C
I wanted to introduce myself. I work for Apple — I recently joined them — and I'm looking forward to making some contributions in SIG CLI. I'm an SRE, so that naturally influenced my interest in kubectl productivity. I'm here to improve my skills and contribute to the community in any way I can, so happy to join and happy to contribute. Looking forward to it, thanks.
A
Okay, hearing none, we can jump over to the topics. The first one — we briefly started talking about it last time, but we decided to push it over to this time, since it requires a bigger discussion. Sean, since Tim asked you in the first place, I'll let you talk about the proposal.
D
Sure. As I mentioned before, I normally don't bring Twitter into the meeting, but this was Tim Hockin asking — there was a thread, which is linked here, asking about what could be done to keep users from running containers as root. One of the suggestions was that maybe the output from kubectl could be a different color, say red. And so I thought, because this is Tim, that it may be useful to at least consider it and see what the rest of the community thinks.
D
Yeah, I don't think we came to any conclusions, if I recall correctly.
A
We were close to the end of the meeting and we wanted to discuss this more in depth, so we moved it over to the next call.
E
Maybe the best way to proceed is for someone — or those that are interested in having input on the solution — to write a proposal for how we could make this better. There are probably a couple of different possibilities, from writing a log message to standard error, to colorizing the output, to something else.
D
And I think there's also a link to a colorizing plug-in — is that correct? — with kubecolor.
E
Yeah, and I think there's that discussion as well, and these should definitely be talked about together.
A
Coloring is a very personal preference, and this probably goes back to Doc's proposal about having a configuration directory. We could probably have some options around coloring, which would include both picking the right colors and matching the colors to the theme of your terminal, because there are people that prefer darker colors and others that prefer lighter colors. So that's one angle, and a general approach that I personally think should be tackled first.
A
Secondly, the question is how much coloring we actually want to implement — in get for starters, but I'm guessing that this will also apply across the board, to be consistent. Whether we're coloring just single columns, entire rows, or particular elements of the output — I'm pretty sure that we won't be able to satisfy everyone, and there will be as many opinions about what should be colored and how it should be colored.
E
We could maybe start out with just a lint command. I imagine that the category of things that we may want to guide the average user towards doesn't end at security, and it doesn't end at this particular security issue. They may want to automate it — much like there's golint and such running as part of their CI/CD pipeline — and coloring actually wouldn't necessarily solve that.
E
I mean, we're detecting this somehow, right — based on static analysis of the configuration.
E
So if you run something like kubectl lint and give it either a -f with the configuration, or give it the pod or whatever, it reads the configuration and exits non-zero if it's running as root, and prints to standard error or standard out — whatever the Go linters do — a message for each item that doesn't pass linting. Potentially we could add something like the golangci-lint config file, where you can enable and disable different linters and configure them, so that as part of your GitOps pipeline you can enforce certain things before it hits OPA or something like that — what's it called, the Open Policy Agent?
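A minimal sketch of the kind of check being discussed. To be clear, this is not an existing kubectl command — the `lint_run_as_root` rule, its message format, and the manifest shapes are all illustrative of the idea (report each failing item, exit non-zero), not of any agreed design:

```python
def lint_run_as_root(manifest):
    """Hypothetical lint rule: flag containers that may run as root.

    A container passes only if its securityContext (merged with the
    pod-level securityContext) sets runAsNonRoot: true or a non-zero
    runAsUser; everything else is reported as a finding.
    """
    findings = []
    spec = manifest.get("spec", {})
    pod_ctx = spec.get("securityContext", {})
    for c in spec.get("containers", []):
        ctx = {**pod_ctx, **c.get("securityContext", {})}
        if ctx.get("runAsNonRoot") is True:
            continue
        if ctx.get("runAsUser") not in (None, 0):
            continue
        findings.append(
            f"{manifest['metadata']['name']}/{c['name']}: container may run as root"
        )
    return findings

pod = {
    "metadata": {"name": "demo"},
    "spec": {"containers": [
        {"name": "app", "securityContext": {"runAsUser": 1000}},
        {"name": "sidecar"},  # no securityContext at all -> flagged
    ]},
}

for finding in lint_run_as_root(pod):
    # A real linter would print these to stderr and exit non-zero
    # when the findings list is non-empty.
    print(finding)
```

A CI pipeline, as suggested above, would gate on the exit code rather than on the colored output.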
A
So you would basically separate the coloring of kubectl output as its own topic and, in parallel, solve the problem of linting, because, like you said, different people will be looking at different things in their cluster. So yeah, I think that would be a reasonable approach.
E
Anyway, we've spent about 15 minutes on this. I'd be happy to engage in a deeper discussion about the technical merits of the different approaches. I think some items might come out of this that would be great for new contributors to pick up, and we could direct the work — some of the things will be easier than others.
E
Maybe we could schedule — Sean, maybe you could be lead point on just organizing the effort, not necessarily taking on all the individual proposals, but setting up a separate meeting or doc or discussion for us to collaborate on this issue.
D
Yeah, I'll organize that, and off the top of my head I'll suggest minutes right after our meeting, just so we don't have to try to find a separate time — so it would be like 10:15, you know, 10 to 10:15.
A
Sounds great, perfect. Also try syncing with Ahmet — I think he mentioned that he was looking at it. Maybe he has some kind of proof of concept or some initial sketches around how this could look. Knowing how Ahmet approaches topics, I'm pretty sure that his input on the topic would be valuable.
F
Yes — do you mind if I share my screen, so I can present?
F
Okay, so this is just a really short presentation on some OpenAPI features in kustomize. This is a problem for custom resources, because the built-in schema only has information about built-in types, so kustomize doesn't handle custom resources correctly.
So here's an example — it's based on an actual user issue that's currently open. On the left I have my custom resource, and on the right I have a kustomization file with a patch. What I want to do with this patch is just change the container image to nginx. I'll let you guys look at this for a second, but it doesn't do what we want.
F
So the desired output is on the left, where what we want is just the image to be changed to nginx, because the patch should be merged with the resource. But instead of merging them together, it just completely overrides the entire containers field.
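The failure mode described here — a patch replacing a whole list instead of merging elements — is exactly the difference between a plain JSON-merge and a strategic merge keyed on `name`. A rough illustration in Python; this is a deliberate simplification of the real strategic-merge-patch logic, not the kustomize implementation:

```python
def naive_merge(base, patch):
    """JSON-merge-patch style: a list in the patch replaces the base list."""
    out = dict(base)
    for k, v in patch.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = naive_merge(out[k], v)
        else:
            out[k] = v  # a list here wipes out the base list entirely
    return out

def keyed_merge(base, patch, merge_key="name"):
    """Strategic-merge style: list elements with a matching merge key are merged."""
    out = dict(base)
    for k, v in patch.items():
        cur = out.get(k)
        if isinstance(v, dict) and isinstance(cur, dict):
            out[k] = keyed_merge(cur, v, merge_key)
        elif isinstance(v, list) and isinstance(cur, list):
            by_key = {e[merge_key]: e for e in cur
                      if isinstance(e, dict) and merge_key in e}
            merged = list(cur)
            for e in v:
                key = e.get(merge_key) if isinstance(e, dict) else None
                if key in by_key:
                    idx = merged.index(by_key[key])
                    merged[idx] = keyed_merge(by_key[key], e, merge_key)
                else:
                    merged.append(e)
            out[k] = merged
        else:
            out[k] = v
    return out

base = {"containers": [{"name": "app", "image": "busybox",
                        "ports": [{"containerPort": 80}]}]}
patch = {"containers": [{"name": "app", "image": "nginx"}]}

naive_merge(base, patch)   # ports are lost: the whole containers list is replaced
keyed_merge(base, patch)   # only the image changes; ports survive
```

Without schema information telling the tool that `containers` merges on `name`, a custom resource gets the naive behavior — which is the bug shown on the slide.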
F
So our proposed solution is to allow users to specify their own OpenAPI schema files via an openapi field in the kustomization, where they can put a path to their schema, and that schema should contain all the necessary information for that custom resource. As a user, what I would have to do is first apply my custom resource to my cluster.
F
Then I would have to get the OpenAPI data from that cluster and put it in a file — this will contain the schema information for my custom resource. Then I can change anything about the schema that I want and put that in my kustomization. And yeah, that's the entire proposal. Are there questions or comments?
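Assuming the proposal lands roughly as described, the kustomization might look something like this. The field and file names are illustrative of the proposal as presented on the call, not a confirmed final syntax:

```yaml
# kustomization.yaml
resources:
- my_custom_resource.yaml

openapi:
  # Path to a schema file fetched from the cluster's OpenAPI data
  # and possibly hand-edited, containing the definitions for the
  # custom resource so that patches merge correctly.
  path: my_schema.json

patchesStrategicMerge:
- patch.yaml
```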
E
It might already be supported, and it might not — I trust that you dug in and there's no obvious way of doing it; it'd probably be hidden behind some weird environment variable if it was. The intent was always to provide this capability, so if it's not there, then adding it makes sense to me. It should be a relatively minimal change.
E
The most challenging aspect of this is probably going to be the bases — the recursive traversal — and having conflicting OpenAPI definitions: some base you use defines its OpenAPI definitions, and then the parent defines its own. These should get merged properly, but it's possible they could conflict.
A
So I have a question about the "fetch OpenAPI data from cluster" step. Can we have an option within the kustomization file, or a flag to kustomize during invocation, to force it to read the OpenAPI data from the cluster instead of from a file?
A
And then check it in — okay, that answers it; I was just curious. There's one other question that I have — I remember that I struggled with this in the past as well. Can you go back to the example that you had? I don't know, it was like the fourth or fifth slide.
A
I think it was the previous one. So the problem with that was about merging the arrays or maps if they don't contain proper information about which key they should be merged on — but I'm guessing that that's already expressed in the OpenAPI, and that's the only missing bit.
H
Funny enough, I actually started working on a kubectl schema command that will let you get the OpenAPI schema to standard out, and it will also print out the JSON Schema version of it, because they're apparently very different and used by different tools. So this is motivation for me to keep working on that.
A
Okay, cool. Eddie, yeah, go ahead — you had the two issues from the last bug scrub. Do you want to talk about them?
I
This is just a quick heads-up about a different KEP that I would like to submit, probably in early January. Internally, we've been working on a new client-side API kind that is based on kustomize configuration functions. It's sort of all oriented around kustomize plugins, and we are building it entirely with kustomize primitives.
I
We thought that this could actually be a really useful thing for the community as well, and we wanted to get community feedback on it and see if there's interest in having us upstream it. We are working on the KEP right now and just wanted to give everyone a heads-up that we'll be opening it in early January, so that it's not coming out of the blue. Happy to answer any questions about it.
I
Well, we will be opening a full KEP with all the details, so — yeah, that would probably be early January, when we have that.
A
Yeah, that's right — we can just mention it during the next call, or whenever the KEP is up. Okay, awesome.
E
I think the point of mentioning it before it's there is that, before we write up this long KEP that talks about, you know, going from alpha to beta to GA, and testing and all this stuff, folks who would be interested could just reach out. If this is of interest to you, then we can include you in the collaboration process as early on as possible.
H
So these two came up during bug scrub, and they're both kind of related. The tl;dr of this issue that we need to talk about is that we currently use a kind List inside of kubectl to represent a mixed-resource-type list.
A
kubectl actually invokes as many get calls as the resources you pass, and then combines those into a big list, and for that reason we are using the List kind — the generic one. I'm guessing that this goes back to the very early days, when kubectl was working against the internal types, and we just apparently missed that one when we were externalizing the entire kubectl.
A
So obviously this is a breaking change. If we decide — and that's probably my proposal for today — to go with a plain YAML or JSON list, that will have to be properly documented, and it will, as usual, be behind a flag. After a couple of releases we will flip the flag, and eventually, in a couple more releases, we would remove the old behavior that we currently have.
A
Yeah, what I meant is: if you do a raw get to the cluster, the cluster returns you, as a resource, the PodList resource. kubectl always wraps those into the generic List. Sorry if I wasn't concrete enough about that.
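In wire terms, the server's response for each resource type is a typed list (e.g. `PodList`), and kubectl combines the items from several such responses into the generic `v1` `List` being discussed. A small sketch of that wrapping — the function name is illustrative, not kubectl's actual code:

```python
def wrap_as_generic_list(typed_lists):
    """Combine items from several typed list responses (PodList,
    ServiceList, ...) into the single generic List kind that kubectl
    prints for mixed-type gets."""
    items = []
    for tl in typed_lists:
        items.extend(tl.get("items", []))
    return {"apiVersion": "v1", "kind": "List", "items": items}

# Shapes as the API server returns them for two separate GET calls.
pod_list = {"apiVersion": "v1", "kind": "PodList",
            "items": [{"kind": "Pod", "metadata": {"name": "a"}}]}
svc_list = {"apiVersion": "v1", "kind": "ServiceList",
            "items": [{"kind": "Service", "metadata": {"name": "b"}}]}

combined = wrap_as_generic_list([pod_list, svc_list])
# combined["kind"] is "List"; the typed kinds survive only on the items.
```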
E
There's the Go struct, right, which is a Go PodList, and then there's the actual API definition — the one that could theoretically be implemented in another language like Java. So is there an API concept of PodList? Do we have anything anywhere in the OpenAPI that says PodList instead of List?
A
If you do a get — not a kubectl get, but a plain get against pods on the API server — the response will be a PodList, with the pods being the items.
A
And that applies to every single resource: when we are defining the types, there's always a single resource type, such as Pod, and an accompanying list type — for Pod it's PodList — which basically embeds the items list. I'm pretty sure that if you do explain on PodList, that should actually work just fine, and I can quickly check that.
D
I have a quick question while you're looking that up, Maciej. Apparently we didn't run into a problem with this when moving the code to staging — a lot of what we did when we moved the code to staging was remove internal types, but this internal type, the List, apparently we didn't have to remove. It seems interesting that that particular one didn't have to be removed.
A
So officially the problem boils down to the fact that you cannot explain the List type. I just double-checked: if you do kubectl explain podlist, you get information that PodList is a list of pods, and there's information about the fields, which are apiVersion, items, kind, and metadata — the usual things you would expect from any other resource.
A
So theoretically there is an API, but it's not explainable, because I'm pretty sure that we don't produce OpenAPI for it. This is why Eddie was reaching out to SIG API Machinery — maybe that was a missing bit on their side, that the v1 List is not being exposed — but it turns out that it was never meant for public consumption. For public consumption, the API folks imagine the PodList, or your resource's list type, and this is how the entire API machinery works.
A
A simple approach that we could take right away is documenting that this is the way kubectl works these days, and then either have an additional option which will flatten the list — to always return plain YAML or JSON lists — or keep working with the current one. Or, if we think it's reasonable to do, we can go all the way and change the current list representation.
E
Let's just document that. For the immediate term, if the request is "I want to find out what this thing is", and we say "hey, this is mostly an internal concept, it maps to these other things", then we can decide if we need to do additional follow-up work.
A
Eddie, can you take the follow-up to document this one? Or — I'm not sure if the author of the issue is interested in adding the kubectl bits of docs.
A
tl;dr — I now recall why I wanted to go with the plain YAML list.
A
The tl;dr for this issue is: the original author did a get across all namespaces for pods, which, in a reasonably sized cluster, can be pretty big. The problem with that is that kubectl, even though it requests resources in chunks — the default chunk size is three or five hundred, I can't remember off the top of my head —
A
The thing is, at the end of the day, it will try to put all of the pods into a single list resource — the unfortunate List resource that we just talked about — which, with big numbers of pods, can be both memory- and CPU-consuming. The proposal from the author was that he would want to see kubectl get pods spit out the data as it is being read from the cluster.
A
That is not currently possible, because you need to create the entire resource as-is to be able to spit it out. If we went with the approach of returning a flat list — the default YAML or JSON list — we would be able to write the resources as we see them coming from the server.
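The trade-off being discussed can be sketched with a fake paginated client. The `limit`/`continue` parameters mirror how chunked list requests work against the API server; everything else here — function names, the in-memory "server" — is illustrative only:

```python
def fetch_chunk(all_pods, limit, cont=0):
    """Stand-in for one paginated LIST call using limit/continue semantics."""
    chunk = all_pods[cont:cont + limit]
    next_cont = cont + limit if cont + limit < len(all_pods) else None
    return chunk, next_cont

def get_accumulated(all_pods, limit=500):
    """Current behavior: gather every chunk, then build one big List object."""
    items, cont = [], 0
    while cont is not None:
        chunk, cont = fetch_chunk(all_pods, limit, cont)
        items.extend(chunk)  # everything is held in memory at once
    return {"apiVersion": "v1", "kind": "List", "items": items}

def get_streamed(all_pods, emit, limit=500):
    """Proposed behavior: emit each item as its chunk arrives."""
    cont = 0
    while cont is not None:
        chunk, cont = fetch_chunk(all_pods, limit, cont)
        for pod in chunk:
            emit(pod)  # peak memory stays around one chunk
```

Streaming keeps memory bounded, but — as comes up later in the discussion — anything that needs the whole result set, like sorting, can't work that way.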
A
So we would be able to bring down the memory usage during such extensive commands. Obviously one can argue that doing a get pods across all namespaces, when you know there are a couple of thousand or more resources being returned, is a little bit excessive, but it's still something that we should consider, because I'm pretty sure that it's a perfectly valid use case for some people to expect this to work in a reasonable manner.
A
Yeah, I'm guessing this also includes the fact that the output we're getting from the server is JSON — oh no, he's requesting JSON, but I'm pretty sure that we do some kind of conversion on the fly.
E
A 70x blowup seems like a lot. What I do know is that we store both the live object — I think it was the resource builder that may store multiple versions of the object and that sort of stuff.
A
Yeah — on one hand the serialization is a potential issue, but even if we fix the serialization part, with big requests this still has the problem of combining all the resources into a single list and only then printing it, instead of being capable of returning them as we go.
E
Yeah, I agree. I think both are probably needed — a 70x blowup seems like enough that it warrants a bit of looking into — but you're right, it would be much more elegant to stream the resources as we read them. That will mean that if there's anything we do that requires sorting or filtering or that sort of stuff, it won't work, but —
A
Yeah, I mean, in this case he was basically getting those resources as JSON, so I'm not 100% sure how much sorting we actually support with JSON or YAML output — I'm not entirely sure that works, because the sorting might be implemented only for the table output. But yeah, especially since kubectl get retrieves the resources in chunks, it seems natural to also write them in chunks, the way we read them. Do we have time...?
E
Is someone actually going to take this issue on? We could put a new-contributor label on it and record our discussion — kind of lay out the steps there. But, Maciej, Sean — is this enough of a priority that you see it being staffed in the next three months?
A
I don't have the resources to staff it personally. I'm more than happy to help anyone that is interested in working on this topic go through the entire process of submitting and reviewing, whenever they're interested in doing so.
G
So, as a starter project, it'd really be great for somebody to write a unit test that reproduced this. That would establish regression coverage and mean we understood what is happening with the memory usage here. Even without understanding the internals, you could write a unit test to reproduce this, and that could be appropriate for somebody just getting started in the code base.
E
Let's put a new-contributor label on this, maybe record the state, and suggest to the author — it seems like they've done quite a bit of digging into this — that we're interested in working with them to get it fixed, if they want to take ownership of it.
A
There are options. If I were doing something like that — I don't know, for backup or for exporting and working with the data — I would probably think about doing it manually with client-go, because I'm assuming that he's getting it all to work with the data further.
H
Sorry to throw another issue in here, but I just linked one in chat — yeah, I put it below there too. So this is an old one: we don't actually stream the results, we just get them in chunks. So these are all very closely related.
A
The work is reasonably simple; the timing is a little bit more demanding, because if we will be rolling this out as a default, it will require at least three or four releases to get it done.
A
I would still document this, because even if we roll out the change, like I said, rolling it out will require some time — it is changing the defaults, and that requires time. So we have to document this properly.
A
Until we roll out the change — so both: documentation, because that's simple and something that we can get right away and probably include in the 1.20 docs; and then, starting from 1.21, if we have someone interested, we will work on a solution.
A
And I'll try to write down the steps, either today or rather tomorrow, in one of the other issues — we'll probably close one of them; I don't know, I'll leave that one to you, Eddie — and I'll note the necessary steps and the decision that we've made today on how to proceed further with this one.
G
The only difference between these two branches is the default value of a command-line flag. Unfortunately, early adopters found bugs in 3.9 — which is sort of working as intended; that's why we're going to the effort of maintaining these two different branches.
A
I haven't seen the date for 1.21 yet — I'm not sure if anything was published — but I can probably check.
A
I guess I have a question, Jeff, about the boolean flag that you mentioned — the one that 3.9 changed versus 3.8. Does that flag exist in the current version of kustomize that we have in kubectl? I'm asking whether, when we —
J
Yeah, so I'll keep this short — there are a few quick updates, and it's been a while since I gave any status updates. We got a request from a couple of users for a Linux ARM build, and that's a big enough thing now, so as of a couple of releases back we're distributing four platforms, including Linux ARM. Second, we just released 9.3.0 — you can check out the release notes; there are actually quite a few sort of minor, but I think pretty important, UI enhancements.
J
Now you can very quickly switch between clusters and namespaces with a single click — previously there were a couple of steps you had to jump through to change those contexts. Those are all in the release notes, which I can point you to. Behind the scenes —
J
We've had some conversations with Maciej to sort of finalize the repo migration, so I just wanted to get it out there in case there are any last words about what we should do or how we should do it. Maciej and I have been working behind the scenes to make that happen, so the hope is that we can go from the IBM-hosted repo to having something underneath the — I guess —
J
It's
the
cube
enhancements,
karate
six
kubernetes
so
anyway,
if
you,
if
you
want
to
participate
or
guide
or
have
any
feedback
on
that,
let
me
know
if
you
have
any
objections.
Let
me
let
us
know
otherwise
we'll
proceed
with
that.
A
Okay, cool — so yeah, thanks, great update, Nick. Is there anyone else that wants to share some updates or bring some topic to others' attention?