From YouTube: Kubernetes WG LTS 20190402
Description
https://git.k8s.io/community/wg-lts
Meeting agenda / minutes:
https://docs.google.com/document/d/1J2CJ-q9WlvCnIVkoEo9tAo19h08kOgUJAS3HxaSMsLA/edit?ts=5bda357d#bookmark=id.udhx5g1vxxls
A: All right, it's two minutes past; let's get this party rolling. I pasted the doc in the Zoom; I'm guessing you all already have it. This is the April 2nd, 2019 Working Group Long Term Support meeting. The meeting is being recorded and we will post it to YouTube just afterwards, so everybody please adhere to the community code of conduct and be good people, because the world will see. In the agenda doc we've got a couple things. If there are other things you want to talk about, feel free to throw them in there. I'm actually going to change the order slightly; I think one of these will be a longer conversation, maybe, than the others. So the first thing, just as an FYI: in the prior meeting we'd been talking about trying to get on the KubeCon EU agenda, and I chatted with the program folks and they added us for a deep dive. So we can have something along the lines of what we did at KubeCon North America in Seattle and have a conversation about where we are. I think I'll do sort of the same thing, where it's one slide and then conversation, hopefully, and we'll see how that goes. The expectation is we'll have the survey results at that point and will have also done some discussion of a couple of specific proposals, and there's a variety of other things around discussion that are in flight. Donald?
B: Hee-Haw responded saying we should probably avoid sharing the data with everyone, especially when it contains IP addresses and email IDs; we should scrub that out and then share it with everyone. I have not looked at the complete survey yet. It's multi-dimensional data, so trying to work with it using Excel tools is difficult. We don't have access to the admin account, so it's going to be...
B: It will take some time to pull out all the data and understand where it is. But from the perspective of what we discussed in the last meeting: we had about 490 starts. Against the earlier understanding that we had 490 people completing the form, there were about 240 completions, so about a 50 percent completion rate, or drop-off rate. That's the same as when I looked at the data. It seems like the first, second, or third question is where people stopped. The first question is: do you use Kubernetes, yes or no?
B: The second question is the relevant one, which asks which persona you belong to. So if people did not answer the second or third questions and dropped out, it seems like a lot of those people just started the survey and did not finish it because they got busy with something else. So it looks like a lot of drop-off, versus what the 50 percent completion rate states.
B: If you look at the SurveyMonkey data (we don't have access to SurveyMonkey, so not much got done), I tried to see whether we had genuine participants, and people who provide their emails are usually genuine participants. About 80-plus participants provided their email, and the crowd was pretty spread out. I also looked at IP addresses to see if they were coming from the same source and what the variance was, and I saw there were very few duplicate IPs.
B: And yeah, as I get more data, I'll probably get more details. Yes, there was a good trend toward people who call themselves cluster operators and cluster users filling in the data. I did feel that some people repeated it, as we had instructed in the survey itself: if you think you belong to multiple profiles, you should probably fill the survey out again as the second profile. That was one of the instructions we had passed down.
B: So it looks like people have repeated that. I've given very approximate numbers here, rounded off in different places; we'll have more accurate numbers once we have better tooling. I'm writing a Python script to pull the multi-dimensional data and do some analysis on it, and I think Tim has added some details on the distribution of the different roles.
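The analysis described here (loading the survey export, dropping blank responses, and counting respondents per self-reported role) could be sketched roughly as follows. This is a minimal illustration, not the actual script; the column names and sample rows are hypothetical stand-ins for the real SurveyMonkey export.

```python
import csv
from collections import Counter
from io import StringIO

def role_counts(csv_text, role_column="role"):
    """Count survey responses per self-reported role, skipping rows
    where the role answer was left blank (i.e. a drop-off)."""
    reader = csv.DictReader(StringIO(csv_text))
    counts = Counter()
    total = blank = 0
    for row in reader:
        total += 1
        role = (row.get(role_column) or "").strip()
        if not role:
            blank += 1  # started the survey but never answered the persona question
            continue
        counts[role] += 1
    return counts, total, blank

# Tiny hypothetical export, standing in for the real data dump.
sample = """respondent_id,role
1,cluster operator
2,cluster user
3,
4,cluster operator
"""
counts, total, blank = role_counts(sample)
```

Filtering blanks before counting is what separates the "200-odd real cluster operators and users" figure from the raw start count.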
B: If you look at the count of all the numbers that Tim has provided (I filtered out all the blank responses), that leads you to about 200 participants who were cluster operators or users, which is a good number, but I will be more confident once I have my script and all the data. The survey ends on April 26th; I recommend everyone pass the URL to everyone you think should fill in the survey. Advertise, advertise, advertise. Thanks; back to you.
A: So Josh mentioned in the Zoom, on the 50% completion rate, that having something like a raffle might have made it higher, but at the same time 240 completions is actually pretty good. Looking at the numbers of distributors and hosting providers and vendors, we actually have a fair amount of them compared to the list of certified distros, hosting providers, and vendors, because that number seems to hover around 70 or 80. So I don't know; we definitely have a lot of responses.
B: One more metric I got from Stephen was the average time that people took to finish the survey: about five minutes. SurveyMonkey says it should take about eleven minutes max for someone to fill out the survey, and the average time people spent on it was 5 minutes and 14 seconds. Josh, you wanted to say something?
C: So one question I would have for this data-gathering effort: do we want to do a second data collection specifically for the large distributors, to give us aggregate information? That is, for Google and VMware and Red Hat and the other large vendors to give us aggregate information covering the same data, but in their case it's not going to be "which version are you on", it's going to be...
C: Well, I mean, the existing survey that we have is a really good way to cover cluster operators, and vendors that have small numbers of customers. But the thing is, there are a lot more people on GKE who haven't touched this survey, and we can get useful aggregate data from the large vendors as a complement to the individual survey responses. We could basically use the same questions, just replacing the checkboxes with numbers.
A: Our very first question is: are you or your customers currently using Kubernetes today? A bunch of the blank responses were ones where all of the subsequent answers are blank and their first response was "no". So they went to the survey, thought "oh no, this doesn't apply to me", and then they stopped.
A: I'm guessing they broke off at that point because they answered "no" on that first question, and then that was it; they bailed from there, which would make sense. I guess that's a good sign, but it's slightly fascinating to me to see, based on the way I thought we described the survey, that people landed on it, started to fill it out, and then: nope, never mind, this isn't me.
All right, well, I guess that's probably enough for the survey topic, but we should look and consider. I think we'll want to look at the responses that are distributor, host, or vendor ones and see if they left an email address or whatever, and see if we can figure out whether we have the big vendors represented or not, and we can always try and pull some additional data.
A: Understood. A question on the Zoom: we just got a snapshot of this, basically privately. We're not sharing it yet because we want the survey to stay open and not be influenced, and we haven't done the full anonymization of that data dump yet. This was just sort of an FYI to us, so we could start seeing what the data is and thinking about how we would parse it and process it.
A: So we've got a couple other things on the agenda that are more open-ended conversation. The first one I wanted to bring up: Jordan started a document (I think it was Jordan; I know you shared the link to it). There's a document linked in the agenda called "Kubernetes supportability: external dependencies".
D: Yeah, so this is sort of step one, or a launching point, for a lot of different efforts. We have a lot of things going on at the same time, and we wanted to make sure that they were well informed and that we had eyes that could see everything going on, to make sure we were picking up any gaps. So if you jump into that document, there were, I think, four main efforts related to getting a handle on our dependencies.
D: The first is just to find out what the actual supportability of our dependencies is, for things like Go and etcd and Docker and containerd or DNS: actually understand how long they are supported, how they are supported, how they publish bug fixes and security fixes and things like that, and whether there are gaps between what they support and what we say we're supporting. So that's pretty much an information-gathering exercise.
D: There are a few question marks still in there, and if there are primary dependencies that aren't on that list at all, please add them; call them out. This isn't so much about the add-ons and things like that. This is more like: if you have a Kubernetes cluster, you are using one of these things, and a gap in supportability of one of these things would directly impact our ability to support Kubernetes releases as long as we want to.
D: So that's getting filled out; that's helpful for identifying gaps, and then that gives us targets to say, all right, golang has a 12-month support window. What does that mean for new releases that we put out? Do we need to be on the latest golang in order to have a supported version of Go for the lifetime of a Kubernetes release, or do we need to think about how we would switch and upgrade a release branch to a newer version of Go? This gives us information that can feed into some of those discussions.
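The comparison being described, checking whether a release's intended support period outlives a dependency's support window, is simple date arithmetic. A minimal sketch; the window lengths and dates below are purely illustrative, not the project's actual policy numbers:

```python
from datetime import date, timedelta

def support_gap_days(release_date, release_support_months,
                     dep_release_date, dep_support_months):
    """Days by which the Kubernetes release's support period outlives
    the dependency's support window (0 if the dependency covers it)."""
    month = timedelta(days=30)  # rough month; fine for a sketch
    release_eol = release_date + release_support_months * month
    dep_eol = dep_release_date + dep_support_months * month
    return max(0, (release_eol - dep_eol).days)

# Illustrative: a release supported 12 months, built on a Go version
# released 3 months earlier with a 12-month support window.
gap = support_gap_days(date(2019, 3, 25), 12, date(2018, 12, 25), 12)
```

With a 9-month release window the same dependency covers the whole period (gap of 0); stretch the release to 12 months and a gap opens up, which is exactly the "upgrade the release branch to a newer Go" question.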
D: The next item is probably the biggest gap, the most open-ended, and the one where we could use the most help, which is putting together processes to make sure that we test and ship the things we support. Every release, at the last minute, we update the release notes with "hey, here are the versions of external things", and it's kind of just looking at the list that we had in the last release and searching in git for things that touched those versions.
D: It's less of an intentional "here are the versions we support and why, and here's where they're being tested", and more of a "well, I don't know, this is what the code says we're shipping, so I guess this is what we're shipping". Especially for the dependencies where we support multiple versions of things, I don't think we have test coverage of all of those versions. I think some of it is either manual, or we test at the top and we test at the bottom and just assume everything in between works anyway.
D: If I had to pick a section, this is probably the most understaffed and chronically problematic. For the other things we can gather information and figure out ways to interlock with some of these dependencies on support windows and things like that, but this is all on us; this is just our release engineering test process. So if you're looking for a place to contribute, like I said, this is easily the most understaffed. The next...
D: So when I talked about this in SIG Architecture, I'd said the first few sections should make us feel bad and motivate us to just get rid of external dependencies wherever we can. That's what the third section is about. That's one way to clean up our matrix: stop supporting so many external dependencies wherever possible. And so I gave some examples of efforts that are happening there.
D: ...that we would support, so it gives us a single test target, and it gives vendors a single interlocked target for their testing. There are several of these: the Container Runtime Interface, the Container Storage Interface, the Container Network Interface. These are at varying levels of maturity; the Container Storage Interface is actually the most mature.
D: That went 1.0 in the previous Kubernetes release, and it's 1.1 in 1.14, I think. So I think the effort here is to get these API interfaces stable, or, if they already are stable in everything but name, actually do the work to mark them so. CRI, is it v1? If it hasn't changed in a year and a half and everybody's using it, we should probably call it stable. So I'd like to see some of the things tracking those. Again, this list is likely incomplete.
D: If you know of things that are going on to remove in-tree dependencies and move them out of tree, please add them and link them, so we can have a better picture of what's going on. And then the final item in the doc is actual dependencies that we have that are unmaintained; unmaintained is probably the biggest example. There were a couple dependencies, glog and the YAML library we were using, that we forked and brought into the kubernetes org because the original maintainers weren't updating them. So, yeah.
I know that we had a few pretty big dependencies drop in the last release; a few long-deprecated things around our API REST handling stuff finally got removed, and that dropped a bunch. There are a couple cloud providers, I think three, that have been deprecated for a while; they're going to get dropped in 1.16, so that will drop a big chunk.
D: So as those drop, their transitive dependencies sometimes drop as well, if we're no longer using them. There have definitely been more eyes on this area, and things that bring in a lot of stuff to vendor are getting questioned now, so I don't think we are getting worse. I think there are a lot of things in flight that, once they land, will help a lot, and so it really is about convincing people that these things are worth finishing and that we'll get a lot of benefit from it. Wilson?
F: One slightly off-topic question, but this Go dep: is it "dep" these days? I forget which one it is. That's the one that's about the fifth iteration of an attempt by Go to actually manage dependencies. Do we have any faith whatsoever in it? I don't know anything about Go modules, but all the other ones were replacements for the previous crap one, which then got deprecated.
A: And I think that the right people are working on those conversations. I just posted a link to Russ Cox's blog about dependency management, one of his more recent ones, from like two months ago, and it's a solid read. This is a fairly fundamental problem in computer science around code reuse, and it's come to a head. But there are a lot of people in the Go community, in the core, who have been looking at this problem for like a decade-plus, even outside of Go, and... as much as it's painful.
B: I had a question with respect to release dependencies. At least in product development, I see many times that you don't consider the life cycle of your dependencies. For instance, Go has a 12-month support cycle. There are lots of products out there that use an older version of Go to build. If I look at Docker's support, it's about 7 to 12 months of security fixes, but since Go is only supported for 12 months, how does Docker upgrade its Go version after 12 months? How does it work?
C: Because, with an LTS, you know, there's the approach where you push patches upstream, which is super painful and takes a lot of effort, and the one where you upgrade your dependencies. And the problem with upgrading dependencies is that sooner or later you'll run into a dependency that you can't upgrade and maintain backwards compatibility. Yeah, and it'll happen, and in our LTS policy, or our practice, we need to come up with what we do when that happens.
B: It's an important question, because golang is supported with fixes for only 12 months, and that's where, you know, the slowest one decides your pace. So we should probably discuss what we should do in such cases, when we see all those dependencies which are beyond our area of influence. Yeah.
D: And I think this feeds into, you know, our cost determination of what it costs to support Kubernetes for X releases. Right now we do 3. If extending that from 3 to 4... actually, you know, we've got a couple stragglers here; Docker stands out at seven months. But if, without a lot of grief, we could extend that to four releases and still fall within, you know, the support windows of our major dependencies, that could be reasonable. If extending that beyond four to five means... okay.
F: I mean, sometimes that boils down to the actual thing. So, I mean, if Go doesn't have that many security releases in any given year, and, you know, backporting one patch is relatively straightforward, for example, then the cost is not that high. If Go releases, you know, dozens of security fixes every year, then that's a, you know, huge cost, and I don't know what I...
D: I would actually disagree with that. I would say, if that's going to be part of our process, then we need to maintain our own fork all the time. What we don't want to do is try to define and set up this process of forking Go and backporting a thing in the heat of a security incident, so...
F: Philosophically, I agree. What I'm saying is, practically, the actual cost is a function of the number of backports you have to do, and if that number is very small, then, philosophy aside, the actual practical cost is less than the practical cost of the alternative, which has all the other problems Josh mentioned, and which may be philosophically better but practically much more costly.
A: Just to explicitly loop back: a number of meetings ago, Tim (Tim St. Clair) had talked about, and drawn a picture of, I think just shared on the screen, a model that they had used. I think this was in the University of Wisconsin's Condor project, if I recall correctly, but it was sort of a fork-branch system of having a development stream and peeling off stable branches for support off of that, and he's started to flesh out the document describing how that goes.
G: Yeah, I just wanted to mention: we were talking a couple of meetings ago about having people come and give experience reports, and I thought it would be useful to just have a relatively standard set of questions, so that we can say: if you're going to give us an experience report, please make sure that you hit these things. That way we can make sure that everyone who is giving experience reports has, you know, a common basis for comparison. It's pretty much based on the survey questions.
B: Yeah, I think I like the blueprint; it gives us a good guideline for someone who wants to share their experience but doesn't know which part they should share and what would provide the best value. So this is a good guideline; we should probably share it. I think last meeting we discussed that JPMorgan might want to talk about their Kubernetes experience, so maybe we can share a template like this: here are the questions the group is interested in.
G: I mean, I'll try to put it in the document. This is just, you know, an idea for the sort of things that we want to know, no agenda or anything. I don't know about anyone else, but when you're putting together a talk or presentation or something, sometimes it's really nice to have a set of questions that you need to answer, to use as a basis for structuring the story.
A: If anybody has comments or thoughts on that doc, throw them in there. It would be nice to have it kind of agreed, and then I can take the AR to go ahead and start inviting people to the meeting, give them a specific slot on a specific day's agenda, and have them commit to showing up that day. So, JPMorgan Chase: we'd had some discussions with them.
A: Quentin had a contact there whom he'd reached out to, and we chatted a bit. At KubeCon I talked to somebody from both Uber and Airbnb, and they were potentially willing to share some of their things. I think any of the companies that ends up on the keynote stage at KubeCon talking about how they're deploying Kubernetes is somebody who's going to have some interesting learnings, and has potentially already demonstrated that they're willing to share them, if they're out there on the keynote stage.
B: Apart from that, we have one other meeting, the APAC region meeting, and if you have any partners who want to be part of that meeting, we should probably share it with vendors who want to speak in that meeting, to hear from a different region: maybe India, China, anywhere in Asia, Australia. Yeah, that...
G: I guess I forgot to put it on there, but I don't think most people who are on the call right now mind too much. I've been looking at trying to... we've got some requests to change the time a little bit later for the non-US time zone meeting, so that people in India can also join. I've just been going through that process; there's been a lot of buzz in the channel. People are just...
A: So we are on the schedule for a deep dive, and my intent is to do that basically the same as the debrief that we had at the contributor summit in Seattle: one or so slides just to level-set, and then conversation on where we are and what folks are thinking, hopefully with a different audience there as well.