From YouTube: 20200116 SIG Arch Community Meeting
A
Sure, the agenda. Okay, so we have a couple of things to discuss today. I'm actually going to move this up: I'll move the sub-project readouts to the end and we'll do the other topics first. All right. First thing, if anybody wants to comment on it, is what Tim sent out to the mailing list yesterday: a discussion around the KEP process suggestions, which some of you may recall from a few months ago. I want to make sure we make productive use of the time, but the basic suggestions were: change to using a directory structure for the state, so that it's really easy to tell what's already marked implementable, what's provisional and therefore needs more review, and what's already done (right now a lot of things never even get updated to the implemented state), and change the names a little bit to make that clearer. There's also an ethic, a recommendation, of "merge and iterate" on the KEPs, and what that sometimes does is make disagreements get lost, because it's difficult to comment on a KEP that's already merged. So this was a suggestion to come up with some tooling around annotating the KEP, in order to say "this section is still not resolved, it's still under discussion," and then we can merge. Although, what is the advantage of that, Tim? Are you thinking beyond just...
C
Practically, you will break GitHub if you accumulate more than a few hundred comments. And yeah, there are sections that get resolved; wrapping those up, taking the unresolved questions and carrying those forward to the next iteration, all the normal tricks about re-reviewing PRs that change over time don't work with KEPs, because it's a single file. With code it would be "I already reviewed these hundred files, these three files changed, and I need to look at them again."
B
So there's, like, breaking it into two problems. There's all the existing KEPs, which we'd have to go back-populate, which is a pain in the butt but tractable, at least bounded, I acknowledge. And if KEPs move within the repo, that makes for a very tedious inability to link to them. One answer, I guess, could be to use a short URL, and so tell people: you have to send me two PRs to move your KEP. One actually moves the file; the second one changes the URL redirector, so the KEP links stay valid.
B
Easy, totally. Although I feel like... I know all about the redirectors, and yet I'm guilty of often just copying the URL from my URL bar, which of course doesn't go through the redirector, by design. I mean, the redirector could turn into a proxy so that the URL remained, but then we'd deal with SSL and all sorts of stuff, so I'm not sure we really want that. And we'd be putting our little nginx server in the data path for everything, which I'm not sure we want to do, right?
C
Just on the redirect thing: I think we could figure that out. The thing that I would be a little more hesitant about is losing history. GitHub is really bad about showing you history when a file gets moved. You can do it, and it's not the worst, but it's really tedious to answer "when did this happen?" It just says "moved to done"; there's no history on the file. Yes, so.
E
Go ahead, please. Yeah, so this is Derek, just trying to capture maybe some of the spirit of the conversation we had, for those who weren't there. I think maybe it's worth asking: why do we link to other KEPs from new KEPs? And if I remember a comment you had made, it was basically that KEPs aren't like documentation; the point was "I'm enhancing something that had already existed." So maybe, if we were reflecting, we'd ask: well, why did you link to previous KEPs?
B
I mean, I think there were a couple of things. One was: should I, at some point in the future, go back and edit a KEP that has already been implemented? I don't know that it's worth the energy to try to keep KEPs as up-to-date design documents; they're more like a point in time, which makes them less useful as documentation. But Daniel over there, you're scowling at me, I think.
H
And that is one view. I do think there is an argument that says maybe we don't want to be like all those law books, where they give you a diff to apply to the previous law that they passed years ago, instead of approaching it as a whole. Just a thought; I don't know that I have a super strong opinion. I don't.
B
But I do find myself linking to KEPs when I'm talking to people. Just this morning somebody said, well, what are the numbers we have around services and size and number of endpoints in a single service? I said: go read this KEP, because we worked out the numbers, and the data is in there. Well, I did; I went, I got a URL and I pasted it.
B
Actually, I think in this case I just said "go find this KEP," because I was on mobile. But there are threads and presentations, and they're all over the place. That's right. Also, having a URL redirector seems like a reasonable thing to do. I like Jordan's idea of having it sort of automatic. That's a more complicated thing; it's something more than a config file, but it seems tractable.
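The redirector being discussed could indeed be little more than a table mapping stable short paths to each KEP's current location. A minimal sketch in Go, assuming a hypothetical `/kep/<number>` path scheme and an illustrative target URL (not the project's actual layout):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// redirects maps a stable short path to a KEP's current location.
// Entries are illustrative; in practice this table would live in a
// config file updated by the same PR that moves the KEP.
var redirects = map[string]string{
	"/kep/20": "https://github.com/kubernetes/enhancements/tree/master/keps",
}

// lookup resolves a short path to its current target, if one exists.
func lookup(path string) (string, bool) {
	target, ok := redirects[path]
	return target, ok
}

func handler(w http.ResponseWriter, r *http.Request) {
	if target, ok := lookup(r.URL.Path); ok {
		// Redirect rather than proxy: the browser lands on the real URL,
		// so no SSL termination or data-path server is needed.
		http.Redirect(w, r, target, http.StatusMovedPermanently)
		return
	}
	http.NotFound(w, r)
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()

	// Ask the client not to follow the redirect so we can inspect it.
	client := &http.Client{CheckRedirect: func(req *http.Request, via []*http.Request) error {
		return http.ErrUseLastResponse
	}}
	resp, err := client.Get(srv.URL + "/kep/20")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.StatusCode, resp.Header.Get("Location"))
}
```

Note the drawback raised in the discussion applies to exactly this design: anyone who copies the final URL from the browser bar bypasses the redirector, and avoiding that would require proxying instead of redirecting.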
B
The question I want to ask, I guess against my own design, is: that takes one piece of metadata and moves it up into a higher-order thing. So now we have SIG and state, right? But is there going to be a third one? Is there going to be a fourth one, where we want to move more and more metadata into that structure? And if so, maybe we should revisit what Caleb had started with kepctl and actually just turn this into a tool that runs over a local repo.
F
I don't know if someone else has said this already, since I'm late, sorry. But the thing I didn't get about the proposal is: if you have to write a tool anyway, why not make the tool tell you the things you want to know with the KEPs in their current location? I don't understand how adding the directory structure helps if you're going to have to write a tool anyway. Okay.
B
Caleb started writing this tool, right? I think he had a grander vision for it than just a query tool, but just a query tool might be useful. The workflow could be: clone the repo, run this tool against it, and it will generate a sort of index, and then you can query through the tool: "show me all the KEPs that are provisional," and it would give you the names.
B
Honestly, if somebody else feels like this speaks to them, I'd be happy to work with somebody to prototype the tool, or to take what Caleb had written, if we can get access to it (I'm pretty sure we can), and run with it and prototype a couple of different approaches to this. Because I'm keenly interested in keeping up on KEPs, and it's simply too hard to do. Yeah.
I
So let me just quickly browse through what is currently in Kubernetes. We are using klog, which is an enhanced version of the pretty basic glog, which was based on the C++ idea of writing logs. So it has multiple problems: it's mainly used for easy debugging, but it cannot really do much if you think, as an administrator of multiple clusters, about making any sense of those logs.
I
If you haven't previously seen them in the code, the logs are really hard to interpret and understand. It's also not very useful if you think about the other pillars of observability, like tracing and metrics: basically being able to match some metrics or traces with the logs referencing the same object, for example pointing from a metrics dashboard into the matching logs. So this makes it...
I
...like having a broader view, a better understanding of what is really happening: being able to join different logs without losing the data. So what we are proposing is to introduce a structured logging interface, and this interface would be based on work previously done by Tim and Solly Ross; it expands on logr. Our idea is that we want to focus more on the interface, not on a direct implementation.
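The shape of such an interface, in the spirit of logr, is roughly: callers pass a constant message plus alternating key/value pairs rather than a pre-formatted string, so tooling can later join log lines on shared keys. A minimal sketch (the names and the rendering here are illustrative, not the actual Kubernetes API):

```go
package main

import "fmt"

// Logger is a minimal structured-logging interface in the spirit of
// logr: callers pass a constant message plus key/value metadata.
type Logger interface {
	Info(msg string, keysAndValues ...interface{})
}

// render turns a message and its key/value pairs into a single line in
// key="value" form; a real backend might emit JSON instead.
func render(msg string, kv ...interface{}) string {
	out := msg
	for i := 0; i+1 < len(kv); i += 2 {
		out += fmt.Sprintf(" %v=%q", kv[i], fmt.Sprint(kv[i+1]))
	}
	return out
}

// stdoutLogger is a trivial implementation that prints to stdout.
type stdoutLogger struct{}

func (stdoutLogger) Info(msg string, kv ...interface{}) {
	fmt.Println(render(msg, kv...))
}

func main() {
	var log Logger = stdoutLogger{}
	// Because "service" and "namespace" are explicit keys, lines from the
	// kubelet, controller manager, and API server that mention the same
	// object can be joined mechanically rather than by regex.
	log.Info("updated endpoints", "service", "kube-dns", "namespace", "kube-system")
}
```

Keeping the interface small like this is what lets the implementation vary: the same call sites could be backed by a plain-text writer during migration and a structured backend later.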
I
It would be easy to attach metadata, but we could also build more advanced tooling to look at this data and then join, for example, logs from the controller manager, from the API server, and from the kubelet, and see one view, one history: one consistent stream of logs that gives us a better picture of what exactly happened, what events happened in the whole cluster. So a more holistic view. A really important part of this change is migration, so we are proposing a pretty...
I
We want to propose a pretty detailed plan, and we want to make sure that we will get to completeness and ensure that we don't get stuck in the middle. There were some comments that we shouldn't change the default behavior of logging and that we should wait until GA, so we want to ensure that all the logs will preserve their previous format.
I
For that, we are proposing a dual approach, with two separate APIs: one will still use klog, and additionally we would add separate calls to the new logging interface. And we would write tooling to do most of the work of transferring the klog format into the new logr format, and tooling to ensure that we don't get any regressions back, ensuring that no new...
I
This is for the migration. One last thing that we wanted to account for is that there is a separate effort in SIG Instrumentation introducing tracing. We knew that logging is a much easier effort to start with than tracing, so we wanted to make sure the metadata is consistent, because tracing also needs consistent metadata and a full propagation of context through the call stack. So as part of that, we've decided that we'll be introducing...
A
Thank you. Yeah, I think part of this is to bring awareness to everybody and make sure that everybody's aware this is going on; that's why we're in the meeting here with it. If there are any direct questions, I think we should do that, but we do have limited time. So I think there are two things that Tim said to try with this: whether we agree with the end result, and then the process. Can you, Mary?
B
Sorry, I guess I'm gonna have to step out in a couple of minutes. I'm a fan of the idea. I think the end goal is a notable goal, and I think this is the sort of change that has long-term benefits that we can't quite predict or quantify. That said, it's an enormous amount of work, and I think that if we turn this into an unfunded mandate on each of the SIGs, it will utterly fail.
B
I think the only way to really make this happen is to say: we are going to take the responsibility on for this, and all we're asking from the SIGs would be "please review my changes to your code to make sure that they make sense." Even that is asking a lot, right? But since somebody has to review them: please just look at these log changes.
B
We are going to take on the effort of making sure that the change sets are small and digestible and obvious. We'll do the tooling, which is incredibly hard, but we'll do the tooling to do the auto-conversion, and we'll pick it up from there. The reason we have the incentive to make the tooling good is that we're the ones who have to pick up whatever the tooling leaves us and finish it. So that's my sense on it. And I ran through the KEP this morning and left a few comments.
H
Second that. Right, I'm looking at it; just from a quick grep, I have thousands of logs and I see value here. I have a lot of questions on, specifically, the output: in a lot of these use cases, to be useful it needs to be serializable. But yeah, I don't have the personal incentive to try to actually go through the thousands.
F
I think we're talking years of people's lives to implement this. Hopefully spread out among many people, but that doesn't mean it's actually less time that we would have to collectively invest to make this happen. I just don't see the benefits as being commensurate with that cost.
B
So, huge. I agree there's probably an 80/20 thing here, like so many things in our space, but actually this is one of the few where I think finishing it may be worthwhile. It's hard to quantify and put my finger on exactly why, but I think it might actually be worthwhile. I do think we have a community of people who are chomping at the bit to take on such changes, if we just show them the template and they can carry it forward. And yeah, the reviews are gonna be a little bit tedious, but they're...
C
I just mean, in other places where we've set up a template and sort of a really approachable pull-request style, like static checks and linting and shell-script checking, I would actually say those have been pretty difficult to manage, and with questionable benefit. They tend to change a lot of files, and those reviews are kind of the worst kinds of reviews: a really boring, uninteresting review, and then the one critical line is buried 75 files in, and it...
B
The automation is difficult. Having tried to write this automation for client-go, it is difficult. I found it difficult to get beyond about 50% efficacy, which means there's a lot that you put back in human hands. If you can do better, Marek, I welcome you to do better, please. I certainly did not engage with it as hard as I could have, but I found it very...
F
I'd like to add one more thing to my statement, which is: I am, and have been at various points over the last many years, a very heavy user of logs, at least of the API server logs. And it's a little annoying to search through, but it's doable. It's not the end of the world. Well...
J
It takes about 10 minutes to complete. I'm specifically talking to this crowd, because the longtime folks in the community definitely use GitHub and our tooling in a different way than our new contributors do, and decisions from these surveys kind of impact how we prioritize: what GitHub automation do we want to get done in the next year, or how are we going to change things to be more efficient? So please, if you haven't already, take the ten minutes and fill out our annual survey.
D
It's gonna be quick. At the last code organization meeting nobody showed up, so for the next one, please show up. It was just me that day; I couldn't keep talking to myself after five minutes. Other than that, there was some stuff that I was able to work on with Jordan and other folks, mainly around updating some of the dependencies.
D
For the most part, I was able to get reviews from a lot of people this time. What we ended up doing was creating a big PR with all the things that ended up changing, and then breaking it into smaller PRs and getting those in first, while review was happening on the bigger PR as well. We updated a whole bunch of things over the last few days, but as for where we ended up this morning, maybe Jordan wants to speak to it.
D
But I'll give you the highlight: cAdvisor seems to be dragging in a lot of dependencies that we don't use, Kafka and ZFS and whatnot. So we should try to do better in cAdvisor, just like what we did last time. Last time, what we did was figure out a way to reduce the number of runtimes that we pull in, by fixing the imports and registering things, and things like that. So we should do the same for some of the storage stuff that is there in cAdvisor.
D
And this exercise, I'm trying to do it really early in the cycle, the first chance we get, so we know what we are up against. And since we did this analysis with cAdvisor and a bunch of other things, we know that cAdvisor we usually update really late, just before we do the code freeze and stuff like that, so doing it earlier, I think, helped this time.
C
So, on getting us to Go modules: a lot of that involves our dependencies actually making use of Go modules and declaring a good go.mod file, and that actually makes the problem worse before it gets better. It doesn't actually make it worse, but it makes it more visible. It makes the problem we already have more visible, because it takes all of their transitive dependencies and makes them explicit, and makes them participate in our dependency tree. And so that's what happened with cAdvisor between the last version...
C
...the last cAdvisor and the current one: they added a go.mod file, which is great, but it makes visible the problem we already had, which was that we had transitive dependencies on tons of storage drivers and crazy database things. It's fine to have visibility into that, but it means that to move forward we actually need to resolve that problem. So it's painful, but this is on the way to getting to modules.
D
Right. As a result of this analysis, what we've been doing is pinging people in other projects. For example, containerd split some of their code into containerd/console and containerd/cgroups, or something like that, and those didn't have any versioning, any semver stuff, or even Go modules. So we pinged them and requested that they add support for Go modules and add support for semver. And the hcsshim...
D
...folks, we've had an ongoing discussion with them for more than two or three releases about using a specific tag, and they are not willing to give us a tag when we need one. So they ended up creating a tag for something else, and then we said, okay, we are gonna use that for now. And then there was a discussion of "oh, but we didn't certify Kubernetes for that tag that we cut," and then I was like: if you don't cut a tag, how will we test with Kubernetes?
D
So there is some of this stuff happening behind the scenes as well. It's more about making sure that we are talking to people and getting them to add support, so we can then reuse it. The other example that comes to mind, where this will help us: when the thing with runc that Jordan was working on ends up getting merged, then runc will have to hit containerd, containerd will have to hit Docker, and then we'll be able to pick it up.
B
In the next call... I had one little problem, and I've been meaning to come to that working group, but I have been unable to make the time. I found at least one PR that wanted to move something off into utils where, when I did a deeper review of it, I found really egregious bugs in it, and it just got abandoned. I feel like it was being moved for a reason, but the bugs were so big that, basically, the module kind of had to be rewritten. Those bugs still exist, right?
B
Yeah, well, I'm watching the stale bot about to age out this old PR that was significant, and it actually is a pretty important module, and it has some really nasty bugs in it. I filed a separate issue against k/k saying, you know, there's a really nasty bug in this module, but it's not gonna get moved to k/utils until it gets debugged. I mean, I'm not sure that we do that level of exploration on every PR, but damn, we should, because it is really insightful, right?
D
I mean, short of doing something in Google Summer of Code, or something like that, where we can get focused people working on things, I don't know. Somebody has to show up with the willingness to work, and we have to go find people who are interested in this kind of opportunity to learn, because this is really good stuff, I mean.
B
And I'm happy to guide and weigh in on it. I tried to spend a lot of time on that review so that somebody could take it. What it came down to, though, is that this is a significant module (I'm not going to name and shame, but it was a significant, important module) that needs a different API; the APIs it offers aren't reasonable. And so... it's not even any...
B
This one seemed fun. I was surprised that it died; it seemed like a fun, hard problem to tackle that infrastructure dorks would really enjoy doing. And understanding that the person who sent it is also a busy person who's not a drive-by contributor, I'm sure they have other things to deal with. And I didn't have anybody that I could throw the bug at. It wasn't on fire, and so I just, like I said... okay.
A
We have a sort of version 2 of our questionnaire, so we've had some refinements, and we're starting to see people filling them out for KEPs. I have to check and see if we added it to the KEP template yet; we would like to. It's still a kind of non-blocking type of thing. We want, by the end of the month, to send out a sort of quick survey...
A
...that's going to try and inform the questionnaire and make sure that the kinds of questions we're asking are actually going to help alleviate the kinds of problems people are having. And so, in my email, I asked anybody who's interested to please take a look at that. There were some samples from a draft questionnaire put together by some folks, and we're gonna try and focus that down a little bit. It was maybe a little bit too broad, but you'll see that in the comments on there.
A
Yeah, I think that's pretty much all I need to say on that right now. We're looking at the questionnaire, and also, I guess in the email which went out, some metrics, because we want to make sure that this is an effective process. So if some ideas on how we might measure that interest you, please go ahead and review. But that's where we are right now: the initial version of the questionnaire is there, and we're working on validating it and making sure that it's effective.
K
Yeah, this is just a minor announcement. We did find some interesting stuff with regard to adding a new field to an API; we're going to discuss that in the SIG Network meeting, since it was started by SIG Network. Jordan and I have just been deep in the weeds on this the last couple of days. That'll trigger some guideline changes to API review for adding new fields. In brief, we added a field that depended on config, so you could configure a cluster where the implicit default value is one thing, and you can configure it in different ways.
K
To be fair, though, looking at config was not the issue; it is just a very, very hard thing to reason about. Both Jordan and I spotted numerous cases where we would think we were okay, think about it some more, and come up with a new edge case, which triggered a lot of questions. We probably need to add those questions to the API review guidelines, like: when you add this field and a server... what happens when you're in a rolling upgrade between masters?
K
One more note: we introduced the field after implicitly introducing the behavior, without explicitly introducing the field. So the order was: we supported single-stack IPv6, and then we added a field to represent it, but that actually made it even more complicated, because then you had the implicit behavior versus the explicit...
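The implicit-versus-explicit tension described here generalizes: once a behavior exists without a field, a new field has to distinguish "unset" (fall back to the configured, implicit behavior) from an explicitly chosen value. A hedged sketch of that pattern in Go; the field and type names are invented for illustration and are not the actual Service API:

```go
package main

import "fmt"

// ServiceSpec sketches a spec with a late-added optional field. Using a
// pointer lets us tell "the user never set this" apart from "the user
// explicitly set it", which a plain string value cannot do.
type ServiceSpec struct {
	IPFamily *string // nil means "not specified"
}

// effectiveIPFamily resolves the field against the cluster's configured
// default, mirroring how an implicit default interacts with an explicit one.
func effectiveIPFamily(spec ServiceSpec, clusterDefault string) string {
	if spec.IPFamily != nil {
		return *spec.IPFamily
	}
	return clusterDefault
}

func main() {
	v6 := "IPv6"
	// Unset field: the cluster's configured default wins.
	fmt.Println(effectiveIPFamily(ServiceSpec{}, "IPv4"))
	// Explicitly set field: the user's value wins.
	fmt.Println(effectiveIPFamily(ServiceSpec{IPFamily: &v6}, "IPv4"))
}
```

The rolling-upgrade question from the discussion shows up directly in this sketch: an old server that does not know the field behaves as if it were always nil, so old and new masters can resolve the same object differently until the upgrade completes.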