From YouTube: Kubernetes Community Meeting 20171012
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
A
Welcome, everybody, to the Kubernetes community meeting for October 12th. We're back to a regular schedule after last week's 1.8 update. As a warning, though, we may jump slightly out of order, because one of our SIG updates has a hard stop at 10:30. I'll let you know if that's happening depending on how things go. We're going to start out with a demo of a relatively new project, not all that new, but certainly new to reaching full capabilities: KubeVirt, presented by Fabian.
B
Do you see my screen? Yes? Cool, that's working. So hello, my name is Fabian Deutsch. I work at Red Hat, and I'm working on KubeVirt. KubeVirt is a project about bringing virtualization to Kubernetes: we want to run virtualization workloads on Kubernetes, meaning classical virtual machines as we know them from the era we're in today. KubeVirt is about providing a virtualization API and runtime for Kubernetes virtual machines. In a nutshell, we provide that virtualization API using CRDs and the operator pattern.
B
Our virtualization runtime stack is intentionally limited to KVM, QEMU, and libvirt on the node side. The design goal is to provide KubeVirt as an add-on to Kubernetes, so that everything you need is delivered in containers, leveraging DaemonSets, Deployments, Ingress, and whatever else is needed.
B
Because
there
are
already
a
few
projects
with
virtualization
of
the
community
space
I
just
want
to
differentiate
Qbert
from
the
CRI
CRI
approaches
so,
for
example,
vert
led
frak
T
or
the
EOC
OC
ICC
runtime.
They
effectively
CRI
approaches
so
different
CRI
implementations
instead
of
running
containers,
they're
running
VMs
and
then
eventually
containers
inside
those
VMs.
The
thing
here
is
that
those
implementations
are
limited
to
to
the
pot
spec
to
express
what
kind
of
VM
is
getting
created.
B
You can make some modifications, like using annotations to provide additional details to the CRI runtime for a specific VM setup, but for KubeVirt we went the other way: we do the integration of virtualization workloads at the API level. So we decided to go with a dedicated API.
B
We also have some functionality like live migration, which is just not there in Kubernetes. I know migrating pods has been talked about, but we're not sure that will ever come. In the virtualization world, live migration is a critical feature, so we also provide an API for migrations. That's it on the theoretical details, so let's take a look at how that looks in reality.
B
All right, so we've got a demo, and you will find the links in the slide deck; I'll add that to the meeting write-up. This demo is just based on Minikube. So if you've got Minikube deployed, and I've got that on my system here, then you can run the demo as I've done here above. It will check for kubectl, check for Minikube, check out KubeVirt for you, and then deploy the manifests.
B
We
provide
then
string
some
checks
until
qubit
is
really
ready
and
deployed
on
on
your
mini
tube
setup.
It
might
take
a
while.
That's
why
I
prepared
it,
because
our
images
are
currently
a
little
large
because
there's
all
the
debug
information
in
them,
but
once
it
is
deployed
you
can,
you
can
use
cuba,
so
the
most
intentional
way
to
see
if
it
working
is
to
say
get
me.
Ms,
and
then
you
see
that
in
this
case
we've
got
a
test
being
deployed
before
we
dive
into
the
BM.
Let's
take
a
look
how
that
is
provided.
B
This is an iSCSI demo target, so that we are actually able to use PVCs. We're using it directly right now, but you could also use PVs. It serves the disk image used to bring up the VM. We've got a SPICE proxy to give users outside the cluster access to VMs running inside the cluster. There's virt-api, which does the validation of the CRDs before they are really passed to Kubernetes. Then there's virt-controller, which is effectively the operator operating on the CRDs we have.
B
The
vert
handler
would
just
be
no
solid
components.
Speaking
to
the
Burgess
and
monitoring
the
CR
DS
and
speaking
to
the
bird
to
react
to
changes
and
we've
got
the
ver
launcher
when
launcher
is
actually
that
hot
shell,
where
the
VM
is
running
and
the
verb
manifest.
You
can
ignore
that
for
now
a
quick
look
at
the
series
who
provide
you
can
give
to
one
for
migrations
and
one
for
the
MS.
B
So I cat the vm.yaml, and you see here we are using the KubeVirt API. You can put the usual stuff into the metadata section; we've got some support for selectors, and we are working on affinity and anti-affinity. But the interesting part is the spec. Here you see that we can really build a domain: you've got a list of virtual devices, like a graphics device and a network interface.
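For readers following along, a minimal sketch of what such a manifest looked like in the v1alpha1 era is below. The field names are illustrative and reconstructed from memory of early KubeVirt; check the KubeVirt documentation for the exact schema of your version.

```yaml
# Illustrative KubeVirt VM manifest (early v1alpha1-style schema).
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachine
metadata:
  name: testvm          # labels, selectors, and annotations go here as usual
spec:
  domain:
    memory:
      value: 64
      unit: MB
    devices:
      graphics:
      - type: spice      # the exported graphical console mentioned in the demo
      interfaces:
      - type: network
        source:
          network: default
      disks:
      - type: PersistentVolumeClaim
        source:
          name: disk-alpine   # hypothetical PVC name holding the disk image
        target:
          device: vda
```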
B
So let's take a look at the output. The status is interesting: it has already started, and you see here that the graphics console is now exported and is running on the Minikube node, which is obvious. What you can do now is actually connect to that VM, and the most interesting part is that you can connect to the graphical console. It's not like a kubectl exec or attach; it's really a graphical console, using the SPICE protocol.
B
It takes a moment, and here we are. That is really the graphical console of the VM currently running. It's an Alpine Linux image, because my machine is not beefy enough to run anything larger at the moment. So that's the interesting part from the runtime side, on a single node. One additional interesting thing I want to highlight is, as I mentioned, that we've got a migration API. Minikube, as you've seen in this shell, has a single node, but I've also prepared something else.
B
This is our developer setup, which is based on Vagrant and which I'm showing now. We've got two nodes running there, a master node and a single worker node, and we've also got a VM running here; it's a test VM again. What we can do now is take a look at where that VM is running. You see it's currently running on the master. Now you can do a migration, and once you do, it will land on a different node. So, OK: `kubectl create -f` with the migration manifest against this cluster.
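The migration in the demo is itself just another custom resource. A sketch of what that manifest plausibly looked like in the era (the kind and field names are illustrative, not an exact copy of the demo file):

```yaml
# Illustrative KubeVirt migration object: asks the controller
# to move the named VM to another node.
apiVersion: kubevirt.io/v1alpha1
kind: Migration
metadata:
  generateName: testvm-migration-
spec:
  selector:
    name: testvm
```

Creating it with `kubectl create -f` is what triggers the live migration described next.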
B
It succeeded; lucky me. And now it's running on the master. It was scheduled initially on node0, and now it's running on the master, so that worked. That's all I wanted to show. I wanted to give you a brief overview of how the VM API is looking; we looked at the Vagrant setup, and that was just a quick glance. Looking at the future, we're working on making it even easier to deploy KubeVirt reliably on every Kubernetes cluster.
B
One piece of that is working on API aggregation, to get rid of our workarounds. We also want to do the networking properly, to connect up with the Kubernetes networking side, since in the real world we don't always have a virtualized network. At KubeCon North America we've gathered the virtualization-related projects, including Virtlet, runV from Hyper, and Intel, to talk about all the virtualization-related aspects, and here's actually my request for this forum.
B
What I would really like to ask is: is there a chance to get a virtualization SIG or working group, and to which SIG should we be attached? And on the other hand, since we're working on a virtualization API, which is eventually interesting to other users as well, does it make sense for this to be an incubator project? I'm unsure on both points; that is what I wanted to raise.
C
One thing that I think is interesting, and I'd love to get your thoughts on it, is the difference between running a VM in a pod, which is essentially what you're doing, where the VM is the first-class thing that gets delivered to the user, versus running a pod in a VM, which uses a VM's virtualization technology as an isolation mechanism for running the pod. They both involve virtualization, but those are two very different scenarios in terms of what the experience is for the end user.
B
Yes, I agree, the experience really is different, and it's a recurring question. My take is that I think we will continue to see both: the transparent use of VMs for the isolation case, and the explicit use if you want to run the VM itself as the workload. I think we should have the room to discuss that.
B
And to see if there are points of alignment. We run the VM in a pod, and I actually think that has strong benefits, because it keeps Kubernetes clean: you just need to take care of pods, and you don't have to treat it as a big hypervisor. But if we want to consider that route, then it makes sense to find points of alignment between both use cases. There are opportunities, but it's also not that easy.
D
I think multi-tenancy is actually a good fit. I just want to address a quick comment here: this is something I would encourage folks to work on, and there can be a lot of crossover between the node work and the virtualization work in this space, if you're looking for people to work with.
E
Right, and on people to work with: this is Mike Rubin from Google. David Oppenheimer is doing a lot of work on this with Clayton Coleman right now, exploring scheduling, security, multi-tenancy, and namespace work. So if anyone is interested in these areas... I'm not quite sure if this is a SIG thing or a working group.
F
Yeah, so someone, whose name escapes me, I sincerely apologize, someone whom Quinton referred me to, is also interested in starting some kind of multi-tenancy working group, and so we have started discussing that, and we'll have news on that soon. We expect there will be a working group, under the auspices of sig-auth most likely, sometime in the near future. This was David, by the way.
E
For sure, yes, definitely, that sounds great, because it seems like there are just so many folks across the community interested in this, and it actually showed up in the storage face-to-face a little bit this week as well. So having a lighthouse for all of us to sail towards would be really helpful.
F
Yeah, sure. And I think Tim St. Clair... Tim, are you also here? We're going to do this together, I think, or I don't know; I'll start, and hopefully Tim is online. I'll just go through our 1.9 plans for SIG Scheduling real fast. One of them is reducing open bugs and improving stability; that's a key thing. We noticed that there's a large number of open issues that don't appear to be super dangerous, but are kind of...
F
..."why is this happening" kinds of things that we need to dig into and have been putting off. So reducing open bugs is high on the list for 1.9. Bobby has been working on priority and preemption; it was alpha in 1.8, and it will stay alpha in 1.9. There are some known issues with it that he had been planning to address, which he will address, given the short release cycle for 1.9.
F
But
the
idea
is
to
to
improve
the
user
experience
for
those
and
and
make
sure
that
people
can
use
it
easily
when
they're
using
these
features
so
that
they
don't
get
surprised
by
preemptions,
because
this
is
a
fairly
disruptive
new
feature
once
people
start
using
priority.
Of
course,
if
you
use
priority,
then
nothing
will
happen
and
and
there's
no
change
of
behavior,
but
people
who
do
use
it.
F
...we don't want them to get surprised by what happens. So he's going to be working on that, and also trying to find some real users to try out the feature at scale. The last piece of that, besides fixing some of these open issues, is adding priority to resource quota. I've noticed there's actually a lot of work going on in resource quota in parallel; I hope folks are coordinating to some extent.
F
It seems like there are at least three or four different teams that are all adding new resources to resource quota. The priority work isn't actually adding a new resource; it's basically adding another dimension to resource quota, so that quotas are by namespace and by priority, essentially the cross product of the two. There's an open PR proposal from the community on how to do that, worked on with one of the folks in the SIG Scheduling community.
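As a sketch of the "cross product" idea being proposed: a quota object would constrain resources not just per namespace but per priority level within that namespace. The field names below follow the shape this idea eventually took in later Kubernetes releases, shown purely as an illustration; the actual fields were still under discussion in the open proposal at the time of this meeting.

```yaml
# Illustrative only: a quota scoped to one priority level in one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-high-priority
  namespace: team-a
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]      # only pods at this priority count against the quota
```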
F
It's also bad in general to have two copies of the code, and we never really tried to turn anything into a library and reuse it, so we just have two totally different implementations of the scheduling logic. On the other hand, there are some downsides to trying to use the default scheduler to schedule DaemonSets, even though it sounds like a great solution at first, and I was very excited about it. There's an open issue, #42002, if people want to see what's going on there.
F
So
we
want
to
get
that
resolved
and
and
implements
it
if
we're
going
to
do
it
during
1.9,
mostly
just
a
couple
of
minor
things,
there's
gonna
be
continuing
work
on
the
D
scheduler,
which
is
the
new
name
for
the
reschedule
err
that
shouldn't
have
been
called
to
reschedule
err
in
the
first
place.
This
is
in
Cuba
native
incubator.
F
This is the thing you can think of as moving pods around to meet policies, like spreading out pods when they're on overly utilized nodes, and things like that. For example, after a zone failure, when the zone comes back up it's empty; you want to move some of the already-running pods onto the nodes in the recovered zone, because otherwise it just stays empty for a long time.
F
It does things like that. Avesh from Red Hat has been working on it, and some other people as well. It's in the incubator and there will be continued work on it; it's not tied to the Kubernetes release cycle itself. As for the rest of the things, I don't know if it's worth going into them here. Tim, do you think any of those remaining issues are worth talking about? You're welcome to go ahead and talk about them.
H
We can just put notes in the meeting notes. I think the one thing I wanted to spell out, too, with regards to priority and preemption: I forget the name of the original implementation for infrastructure pods that we wanted to never be preempted, but that original implementation will probably be deprecated in favor of priority and preemption going forward at some point in time.
F
Yeah, the critical pod annotation thing, like Tim said, will be deprecated presumably once priority and preemption goes to beta, which is when it will be enabled by default. Then we can get rid of the critical pod annotation and the associated logic, which we were unfortunately calling from the rescheduler; that can all go away.
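For reference, the replacement mechanism being discussed looks roughly like this in the alpha API of that era (a sketch; the `scheduling.k8s.io` group graduated to beta and then stable in later releases):

```yaml
# Illustrative PriorityClass replacing the critical-pod annotation.
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: system-critical
value: 1000000          # higher value = higher priority; may preempt lower ones
globalDefault: false
description: "For infrastructure pods that should not be preempted."
```

A pod then opts in by setting `priorityClassName: system-critical` in its spec, instead of carrying the annotation.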
A
Okay, well, thank you very much. There's more in the notes; if you have questions for them, please put them in the chat. I think, for the sake of the stream, we will finish up the SIG updates and then go back to the release updates. So first I want to check: Aaron Crickenberger is not on the call, or a representative from SIG Testing? Just to confirm.
I
All right, so for docs: docs for the 1.8 release are a thing that happened, yay! Many thanks again to Steve Perry for serving as the docs meister for 1.8. We're working on incorporating feedback from the 1.8 retrospective, specifically on communicating deadlines effectively and clearly; one of the big pieces of feedback was that deadlines seemed fuzzy for docs, and we'd definitely like to make that clearer. 1.9 is coming up; I'm the docs meister for 1.9, and I'd really love to communicate deadlines clearly and effectively.
I
So that's something that will get a lot of focus. Talking about our Chinese translation milestone: SIG Docs had our first sync meeting with the Chinese translation team. That meeting went very well, and over the course of it some things became apparent. Internationalization requires feature implementation at different layers of the docs stack. It requires feature implementation at the authoring layer (where do we store translated files, for example? That's a question about how, specifically, we do that), and it requires feature implementation at the deployment level.
I
So we're working to bring the Chinese translation team into the Kubernetes org, and to bring their translation workflow and their translation materials into the repo. We've also added some members of the Chinese translation team as collaborators on the docs repo, to help with PR approval and to make sure that somebody who is able to understand Chinese-language PRs, and give a meaningful thumbs-up or thumbs-down, is participating in the workflow.
I
So the TL;DR for Chinese translation, for internationalization, is that it is a lot of work, but it's awesome. Other SIG updates: at the last SIG Docs meeting earlier this week, we considered and decided to approve renaming the docs repo. We're going to be going from kubernetes/kubernetes.github.io to kubernetes/website, and Andrew Chen is going to be the one who implements that cutover. And, Andrew, am I remembering correctly that we're going to do that in a month?
J
Yeah.
I
Excellent. Other good news: our open PR queue in the docs repo is now below 40. We have fewer than 40 open PRs in docs. That is largely due to the work of Steve Perry: something flipped inside of him, and he has decided that PRs are the enemy and has decided to close all of them, in a high-quality way, and there will be no more PRs ever for docs. I think it's wonderful, and it has made our workflow much more manageable. So many thanks to Steve Perry.
I
A little bit of crystal-ball gazing: it would be awesome to bring the Kubernetes blog into the workflow for the rest of the website. Currently, though, the blog is the only piece of content that doesn't reside inside the actual documentation repo. We'd like to change that; the work involved is not trivial, but it's also not unreasonable.
I
So,
if,
if
anyone
is
looking
to
contribute
to
the
docks
to
the
sig
Docs
and
if
you
have
experience
migrating
content
from
bloggers
to
markdown
or
experience
with
CSS
and
page
layouts
and
if
you'd
like
to
contribute,
please
contact
me,
you
boy,
do
we
have
a
project
for
you
and
that's
it
from
my
side?
Andrew.
You
want
to.
J
And
glass
for
yourself
yeah,
so
we're
just
about
to
put
out
glossary,
which
is
it's
a
pretty
cool
framework
that
Jessica
did
Jessica
out
from
Hebdo
did
really
great
work
on.
So
the
idea
is
like
we
can
have
like
these
term
terminology
snippets
in
a
directory,
and
then
we
have
a
page
that
auto
generates
like
the
glossary.
That
way,
we
can
reuse
the
terminology
throughout
the
site
and
it's
sort
of
a
like
dynamic.
So
I
put
a
preview
link
in
the
notes.
J
If you want to take a look; I don't want to take up too much time here. The other big project going on is user journeys. We're in the process of finishing hashing out the design and content for the app developer journey, and then we have a few more that we're going to put out as part of the MVP. And, let's see, with Paris's help...
J
We
also
sent
out
a
community
survey
that
has
a
couple
questions
about,
like
you
know,
which
kind
of
roles
and
personas
that
people
see
themselves
as
and
which
ones
think
are
the
most
important.
That
should
help
us
prioritize,
which
ones
that
we
implement
and
then
we're
gonna
try
to
fold
in
a
couple
of
the
other
ones
like
Co
contributors
and
platform
developers,
but
that's
a
little
bit
further
down
the
line.
Let's
see
and
then
Steve
Perry
is
taking
ownership
of
the
auto-generated
Docs,
which
is
awesome.
K
Well, this is Anthony. We're looking to get started next week on the 1.9 release team activities. We do have a couple of roles, though, that are not filled yet, including the secondary release lead and test signal, some important roles there. So please go: there's a link to a spreadsheet in the meeting notes; click on that, and you can just put your name in the list, and I'll be trying to get in touch with you next week. Thank you to everyone, including Jack, who has already volunteered to help out with the release.
K
Keep that in mind when you're deciding what you're going to try to commit to, because this quarter is going to be a short one with the holidays. We're targeting December 13th, the middle of the month, for the final release date. And then, lastly, an early warning to SIG Storage and SIG Network: please go and look at the release-blocking tests. There is also a link in the notes.
H
So that should all be fixed up now, I think. I wanted to do a quick public service announcement about cherry-picks, because there are a lot of questions about how best to do them. There's actually a great doc that exists, and it's linked in the notes; it explains the process and how that gets done. It will also help Joe be more effective in getting the cherry-picks done if you follow the process and use the cherry-pick tool.
G
All right, just a quick note: what was formerly called the dev summit, we are going to be renaming to the contributor summit, as that best matches several initiatives going on, including what you just heard from Andrew about personas. "Developer" indicates to a lot of folks that they are building on top of the API, so the rightful name should be upstream contributors; we are renaming this to the contributor summit. The date and venue have been confirmed.