From YouTube: Kubernetes Sig Docs 20180814
Description
Meeting notes: https://docs.google.com/document/d/1Ds87eRiNZeXwRBEbFr6Z7ukjbTow5RQcNZLaSvWWQsE/
The Kubernetes special interest group for documentation (SIG Docs) meets weekly to discuss improving Kubernetes documentation. This video is the meeting for 14 August 2018.
https://github.com/kubernetes/website
A
B
I don't see Andrew, so I'll step up here. This particular PR is specifically around the kubelet, but in digging around on it, Michael Towson discovered that there are probably similar issues with all of the generated docs: we cannot guarantee that any of them are complete. The kubelet docs are simply the most extreme case. So that's how I'm reading the PR at this point; that might not be completely accurate, but this is something that needs to be addressed both by folks working in the k/k repository and by SIG Docs.
B
So if we can put the word out; you know, if there's anybody on this call who'd like to know at a very high level: the issue is that the commands have CLI help and man pages, and that generated content is what we get for the generated docs.
B
So one quick workaround is to pull the CLI content out of built Kubernetes, but we ought to be able to figure out a way to get it. Andrew can correct me if I'm wrong about any of this, including his availability, and any way we can do that. But it seems that we ought to be able to pull the content out without having to build.
B
That's what the working group is for. Yeah, there's more conversation in the PR; I don't want to misrepresent it. We still need more information. Andrew met with Michael Towson, so he might have some more information here, but basically either we build Kubernetes and pull the CLI help out of that, or we completely redo the generation. We have to redo the generate scripts in any case, but the level at which we do it is the question. Michael?
C
A
B
D
So after talking with the release team: everyone is aware that eventually the project wants to move in the direction of getting away from the dockershim. Tim Pepper sent me the best link for those of us who are like "I need a diagram." Let me look for it really quickly; it's a great explanation of the underlying container runtimes, CRI-O and so on. I'm hoping I can find it quickly; probably not, so I'm not going to waste time. But basically everybody said yes, we would like to move in this direction.
D
Nobody's aware of any specific, concerted effort, especially as part of this release, to be using containerd as the default container runtime interface for Kubernetes. I know that Stephen Augustus said that Red Hat will be using CRI-O as its container runtime interface, and yes, that's the current state of the ecosystem, so to speak. Essentially, we will be going there; we don't know when, but probably within a year.
B
The rumor that I have heard, and I think y'all know where I got it from, is that there is in fact a concerted push to move to containerd for Windows. However, it would appear that there's some fairly big communication gap there. So perhaps we need to just put this on a collective background thread, and I will see whether I can get more info.
D
B
I asked the question, right. When I started hearing it, I assumed there was more conversation going on at the release level than there clearly is, which is why I asked, and why I stuck it in this meeting's agenda. But it does sound as though what I need to do is go away and do a little more research.
D
E
I was going to say, I'm not sure what my role should be here, but since I am the CNCF person, and containerd is a CNCF project, I'd be happy to act as a point person or intermediary for you. Go and tell those guys? No kidding, oh, I will, because I know the containerd docs are pretty thin in general. So if we need to spearhead a big improvement effort on that end, I would be happy to.
A
B
A
B
All that I need to say is in the agenda. I wanted to make it known to the group before I submitted a pull request to amend the style guide, because we do have issues with style guide compliance of all sorts. So before we add things to it, we should at least be aware as a group. I will now go away and submit a pull request, and we can continue the conversation there.
B
A
All right. Thank you, Jennifer, for opening a pull request to eliminate patronizing language from the documentation and to make that a standard; appreciate that. Let's see, so next week is Write the Docs Cincinnati. I will be there, Jared will be there, Jennifer, you will be, yes, and Andrew will be. Encouraging.
F
A
G
A
C
What I was thinking is that, while we're there at Write the Docs, we could use Tuesday to do some SIG Docs planning. So maybe, instead of a regular meeting like we're doing now, when we're doing that planning we could have it online, and if people want to hop on and watch or participate, that sort of stuff, it'd be good.
A
C
B
I was trying to type in the chat, and then you called me out, uh-huh; I'm multitasking today. And I guess I'm willing to just deal with this sort of solo, because I feel like I'm the one who created the situation. I believe that there is some expectation that folks will show up in Cincinnati with an interest in participating in docs but with no project to work on, and most projects that are showing up aren't there to deal with that.
B
We've
dealt
with
it
in
the
past,
so
I
thought
it
would
be
good
if
we
came
prepared
to
deal
with
it
again,
but
that
that
said
it
needs
to
be.
You
know
pretty
lightweight,
because
what
we
can
ask
people
to
do
is
going
to
depend
on
their
background,
the
comfort
level
and
familiarity
and
how
many
of
them
there
are
and
how
much
hand
holding
around
CLA
issues
we're
willing
to
do
so.
C
B
C
B
C
I just wanted it to be sort of a continuation of when we had the SIG Docs summit back at Write the Docs in Portland. I figured since we're all there together, we might as well make use of the time. I personally was kind of docs-sprinted out, so I'm not sure that I would participate.
A
Going forward, it's a good idea, and I think it makes sense to do it again in December, and doing it now, I think, also makes sense. We're approaching the professional time of year where the kind of clarity that a good planning session provides gives us what we need for setting annual goals, helping our employers set their own internal priorities, and things like that. So yeah, that totally makes sense.
C
Use whatever time we can when we happen to be together; it's easier to just do it incrementally, because I think I learned from our last docs summit that we just don't have that much time, and even if we did, it'd be hard to do this for two days straight. So we might as well do it in little increments when we can.
F
Okay, sorry. I think it might be good to go back over the results of the last planning session that we had right after Write the Docs, when we did a big SIG Docs meeting. I am honestly not sure how to map what we've done since then to what we talked about before; I don't remember. I'm not saying we have done things or haven't done things.
F
A
This came up recently because we had a PR open from a member of the SIG Azure repo, updating their list of reviewers in the Chinese docs repo, and it's sort of like updating generated docs downstream: that's not really the place to do it. So the question that came up for me is: what should we be doing with OWNERS files and OWNERS_ALIASES, and specifically with OWNERS_ALIASES files?
A
I think it's really clear to just rip them out and make them repo-specific, so that each individual localization repo has its own set of reviewers and approvers. For OWNERS_ALIASES, I'd also like to float the idea of having that be an empty file, so that when an internationalization repo starts up, its OWNERS_ALIASES file is empty.
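As a rough sketch of that idea (the alias and user names below are hypothetical, following the usual Kubernetes OWNERS_ALIASES format): a new localization repo would start with an empty aliases map, and a SIG could later add a language-specific reviewer alias:

```yaml
# OWNERS_ALIASES in a new localization repo: start with no aliases, so
# that Blunderbuss has nobody to auto-assign for reviews.
# aliases: {}

# Later, a SIG with fluent reviewers could add a language-specific alias
# (the alias and names here are purely illustrative):
aliases:
  sig-docs-zh-reviews:
    - example-reviewer-1
    - example-reviewer-2
```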
A
The reason for that is that otherwise Prow, well, probably the Blunderbuss plugin, will use that list to assign reviewers to PRs in languages that they don't understand or may not have fluency in, and that seems like an undesirable state of affairs: asking people to review PRs in a language they're not fluent in. And it's a scaling problem as well: the more languages you have, the more randomly assigned PRs in different languages.
A
That said, SIG Azure has people who specifically have fluency in Chinese and would like to have reviewers available for Chinese content in that repository. I think that's fine; I think that having language-specific reviewers makes sense. I would ask that folks who make downstream contributions like that also be mindful of any upstream changes that need to happen as a result.
A
So here's my proposal: we provide empty OWNERS_ALIASES files and then allow SIGs to add language-specific reviewers as they please. Does that sound reasonable? Strong thumbs up from Brad and Andrew today. Jennifer? Anybody see any reason this will all come crashing down, like having language-specific reviewers will de-index our docs on Google?
A
C
Are you able to see the post-mortem doc? Yeah, okay, cool. So just to reiterate: this is supposed to be a no-blame post-mortem, so we're not trying to single anyone out or anything like that. The goal is to learn from our mistakes: figure out what assumptions we had that were incorrect, and figure out how to improve in the future. This doc was put together by several people, including Jared, Tom, Zack, Luke, and myself.
C
We had enabled versioned docs, so we'd have these subdomains for the different versions, like v1.8, that sort of thing, and that necessitated finding a way to make sure those didn't show up in the Google search results. So what I did at the time was create a header file that was included during the build command for just those specific subdomains. But then we switched from Jekyll to Hugo.
C
We started using the netlify.toml file, which had all the build commands and stuff, which was good in that it made the whole process a little more transparent, and you could see the control mechanism; it also ensured that all the future versions of the site that relied on Hugo were using the correct build commands. The downside of that is that I needed to move some of this logic into the netlify.toml file.
C
So, as you can see, what I did was put the command that copied the noindex header into the default build command, and I created a separate context so that when the master branch was being deployed as production, it would not use that build command; it would just use the regular Hugo commands.
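A minimal sketch of the kind of netlify.toml split being described, with hypothetical file and command names rather than the actual kubernetes/website configuration:

```toml
# Illustrative netlify.toml: the default build (used for the versioned
# subdomains) copies in a noindex headers file; the production context
# builds without it, so the main site stays indexable.

[build]
  command = "cp noindex_headers.txt _headers && hugo"

[context.production]
  command = "hugo"
```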
C
Right. And so the issue then was that two separate PRs, one trying to make use of the HTTP/2 server push technology, and the other, I think, upgrading us to a newer version of Hugo, kind of stepped on that file. And then the noindex directive got pulled into the header for the master branch, and for the production server as well.
C
So that is why the main site started getting de-indexed; as you can see from the graph, it started to drop off, and it was slow. I think one of the other issues is that we didn't have any monitoring in place to check for this sort of thing. One of the small things that happened, which should have been a signal but which I don't think I understood at the time, was that a really old version of the site, like the 1.4 site, started surfacing.
C
We had gotten pings from people in the Slack sig-docs channel saying these links are broken, and when I dug into it, it was because they were referencing this really old site, which in retrospect only bubbled up because the regular search results started dropping off, and so those surfaced. So that should have been a signal, but I didn't realize it at the time.
C
So after the regular site results started dropping off, people couldn't find the site anymore, and we were notified through both Slack and several issues, both in the k/k repo and the k/website repo, so we were able to take corrective measures quickly. But since the Google search indexing happens on a time scale of weeks, we probably won't see it return to normal for another couple of weeks.
C
So what we learned was that the page headers are extremely important: they should only be modified when necessary, and we should make sure changes are vetted through several people. We need checks put in place that notify a SIG Docs maintainer if pages have been de-indexed, and that prevent pages from bearing the noindex header in production. More broadly, new features added to the site need more comprehensive review.
C
When new features are added to the website, there should be more comprehensive risk analysis and vetting before they're merged. Let's see: knowledge of infrastructure mechanisms and how they should behave should not reside in just one person but in many people, and the default behavior, in retrospect, should probably be set in terms of the failsafe.
C
So if something goes wrong, what is the state? Specifically, I bet we would rather have things over-indexed, with search results from the other versions of the site, than have nothing indexed, because that's probably a worse state of affairs. So we should restructure it so that if something did fail, it's not going to remove all of the search results. And the critical infrastructure should be documented and known to all the SIG Docs folks.
C
What went well: once the issue was identified, it was quickly fixed, and that happened within a few hours. The issues mechanism worked well; people filed issues and reported things through it. What went poorly: even though there were comments in the netlify.toml file identifying the noindex mechanism.
C
I don't think they were clearly stated enough for people to understand the importance of what those lines actually did. And then, let's see: I was CC'd on the PRs to change it, but I did not get a chance to address them quickly enough, or actually at all, so they ended up getting merged anyway.
C
Yeah, the changes that kind of dismantled the noindex mechanism happened over two separate, unrelated PRs. If it were just one PR, that would be one thing, but it happened in two different steps, so it was harder to see what the impact was going to be. And because the Google search changes happen on a timescale of weeks, it'll be a while before we see results return to normal.
C
The issue was reported by an outside user; actually, I think it was a couple of users, if I remember correctly. They reported it on GitHub, and I think people pinged me in Slack as well, so we got a good response once it became apparent. The action items to fix things and prevent this from happening in the future are, I guess: finish investigating the incident and see if there are any other recommendations we should make, and then create mechanisms to detect future incidents.
C
I think that would be some sort of monitoring on the production site that lets us know if these sorts of things happen again. Then we want to mitigate future incidents, so have a default state where the noindex header is not present, so that if something happens, the failsafe will be a reasonable state that we're okay with. And then we want to prevent future incidents.
C
Let's see: the detection and mitigation steps are outlined immediately above. Website builds should fail if improper headers are included, so maybe put some test mechanisms in so the build will not complete if it doesn't fulfill those checks. And then, lastly, ensure institutional knowledge: Tim, I think we should formally hand off the ownership of the critical infrastructure to the CNCF, make sure it's documented, and then loop in Zac, you, and Luke and the others.
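For the "builds should fail on improper headers" idea, a hedged sketch of what such a check could look like (a hypothetical helper, not an existing script in kubernetes/website): scan the generated pages for a noindex robots tag and report any offenders, so a production build can be failed before deploy.

```javascript
// Sketch of a production-build guard: find generated pages carrying a
// noindex robots meta tag. Names and structure are illustrative, not
// taken from the actual site tooling.
const NOINDEX_RE = /<meta[^>]+name=["']robots["'][^>]*noindex/i;

function findNoindexPages(pages) {
  // pages: map of { "relative/path.html": "<html>..." }
  // Returns the paths that would be de-indexed if deployed as-is.
  return Object.entries(pages)
    .filter(([, html]) => NOINDEX_RE.test(html))
    .map(([path]) => path);
}

module.exports = { findNoindexPages };
```

A CI step could call this over the build output directory and exit non-zero for the production context whenever the returned list is non-empty.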
C
A
Thank you for putting this together, Andrew, and thank you for wrangling this process; it's an awesome blameless postmortem. Yeah, while we were discussing it, I added a bit on the fly to "things that went poorly": specifically, that I approved Luke's original PR without any mind for the downstream impact that it created, and without enforcing the check of "hey, this is a new feature proposal."
A
D
I attempted to open this link on my work machine; apparently we are not allowed to open up links that go to China, which I guess makes sense. But I was able to open the PR and the issue, so those links work for me. Basically, one of them just adds a huge snippet of JavaScript. I would ask the individual who submitted it, Matt; you know, it would probably be helpful if you guys could see what I'm talking about. All right: one, two, three.
D
Basically, looking at the file change: other than some CSS stuff, it's just a snippet of JavaScript where, on load of the initial website, it calls out to ipinfo.io, and if the response country is not China, then it will render the Google search results; otherwise the default is to render Bing search results. So what that basically means, if I read this right, is that this nifty little search bar right here will switch between showing Google search results and Bing search results depending on how that initial call works.
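From that description, the snippet's logic is roughly the following. This is a sketch reconstructed from the discussion, not the actual PR code; the `country` field matches ipinfo.io's documented JSON response, but the function and element names are assumptions.

```javascript
// Pick a search provider based on the visitor's country code as
// reported by ipinfo.io. Google is unreachable from mainland China,
// so the snippet falls back to Bing there.
function pickSearchEngine(country) {
  return country === "CN" ? "bing" : "google";
}

// In the described snippet, this choice happens on page load, roughly:
//   fetch("https://ipinfo.io/json")
//     .then(r => r.json())
//     .then(info => renderSearchBar(pickSearchEngine(info.country)));
// (renderSearchBar is a stand-in for whatever wires up the search box.)

module.exports = { pickSearchEngine };
```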
D
So that's that. So basically, I think that's completely fine with me, unless people have a suggestion. Dan basically said in the issue that he would prefer Baidu instead of the Bing search, but Bing is a nice alternative because they have UIs in many languages. I didn't have a personal preference, so this looks good to me, assuming it actually works.
D
F
I just had a thought about the Google versus Bing search: I wonder if it's better instead for us to give people a choice about what search engine they use, and then maybe, in the chooser thing, have something that grays out Google if they're in China, and then we can have a tooltip that says it's not available from your IP address, or something like that.
F
Doing it silently in the background is kind of not super transparent. I don't know, I have a funny feeling about it. It just sort of seems like...
G
So I think your instincts are very, very sharp here, right? The problem that I think you're articulating is that we have arbitrarily picked winners and losers: well, China gets Bing, but the rest of the world gets Google, and there is a potential for the perception being, you know...
G
Giving the choice gives you that community feel of: listen, we're giving you the right choices; we can tell based on the country you're in, but we don't have a preference. As an open-source community, we're not picking winners and losers on this. And again, I'm coming from a company that doesn't have a search engine, so I'm neutral. I think we get you.
D
G
F
D
F
It's interesting... sorry, Cody, I keep talking in Cody's audio zone. It's interesting to me that, by providing choices, we can then backfill and get metrics to back up the choices, because we can say: look, we're giving you the choice, and most people are choosing Google. So the reason we have Google search as the default is informed by metrics, instead of by the way that it always was. I think we can all hypothesize that people are going to choose Google by default if they have a choice.
F
For
the
most
part,
I
mean
the
market
seems
to
back
that
up,
but
we
I
don't
think
we
actually
have
a
way
to
say.
We
chose
Google
search
because
people
wanted
us
to
have
Google
searches,
their
fault.
It
seems
like
an
interesting
opportunity
to
like
we
can
get
analytics
based
on
people,
changing
that
we
can
here
and
put
like
a
analytics
event
specifically
on
the
toggle,
to
see
what
percentage
of
people
actually
ever
change
the
default.
This.
A
But yeah, if we could get... oh, who's the Chinese maintainer? I can't remember the user name, but if we can get the Chinese docs team some input, or at least visibility into the process, they might be able to provide some clarity, and if they have data of their own about search engine preferences, that might be useful to bring into play.
F
There may also be cultural context here that we're not familiar with, about what search engines people are using in China.
A
D
A
D
That's right. Okay, so the docs deadline is seven days from today. Sorry: the docs PR-open deadline is seven days from today. So now it's going to end up being time for Jim, Tim, and I (that is an awesome name for a sitcom) to start lovingly harassing people into at least having an open PR against the 1.12 branch.
D
G
D
It is a merge, not a rebase, so the histories will definitely get interesting, but basically all the PRs are going to be opened against the release-1.12 branch, and then, as they get approved by people, I will merge them into the 1.12 branch and then have a final PR that goes into master. If someone has a disagreement with that, because I have not thought of something that could go horribly wrong in git, I would love to have that feedback. Okay.
D
F
So whenever I was force-pushing into the 1.11 branch to rebase against master, I would then have to go and manually rebase each open pull request, which wasn't terrible, but it wasn't the best. And there are a few SIG Docs contributors who do not allow us to commit into their PR branches, explicitly and intentionally, which meant that those few people had to go and do that work themselves every time I force-pushed.