From YouTube: Kubernetes Community Meeting for 20220217
Description
Kubernetes Community Meeting for 20220217
Topics discussed:
1. Release Updates
2. Dockershim Removal Docs
3. Reliability Bar Proposal
4. K-Dev Migration Announcement
Find the link to the doc containing the meeting agenda & discussions at https://bit.ly/k8scommunity
If you have any questions about the meeting, you can reach out to the Contributor Experience SIG in Slack under #sig-contribex.
You can find more about sig-contribex at https://github.com/kubernetes/community/tree/master/sig-contributor-experience
A
Hello, everyone, and welcome. This is our Kubernetes community meeting. Today is Thursday, February the 17th. It is our first community meeting since June. A few things before we get started: this meeting is live streamed and will be posted publicly on YouTube, so please be mindful that what you say is being recorded. We also have a code of conduct, which basically boils down to: we should all be excellent to each other. My name is Nigel. I am a community manager over at VMware, and I am here representing SIG Contributor Experience. As we get started, just as a reminder, please mute yourself when you're not speaking. We have Josh Berkus taking notes for us, so thank you so much for that. Jumping right in, we want to start with our release updates.
B
Yeah, hey everybody, I'm Xander, one of the shadows for the release team leads for 1.24. Current status: we're just about one third of the way through the overall release cycle, which puts us at currently targeting the 19th of April for the 1.24 release, 61 days away at this point. For enhancements, we've got 66 enhancements in the tracked state at the moment, ranging from large to small features that would be included in the release. As for what's happening right now, you can expect the communications team in particular to be ramping up on blog posts to highlight some of the content that we're going to see in this next release. And the next major milestone, or big date, is our code-freeze deadline, which is going to be on Wednesday, the 30th of March, at which point all PRs for code that intends to make it into the release need to be merged. So, yeah.
A
Awesome, thank you so much, Xander, for giving us the update about 1.24. I realized I did not post the notes, so if you want to see them, I'm going to drop that in the chat for you. We're going to move right along to our first topic after the release updates, which is about the dockershim removal that is coming. I want to hand it over to Kat Cosgrove, who's going to be moderating that discussion and letting us know what's going on there.
C
Hello, everybody. I'm largely here, like Nigel said, as a moderator, so if you have questions, or you want to talk about how this is going down and why, I'm happy to facilitate that discussion. But specifically: we deprecated the dockershim a couple of versions ago, and it was a little bit more dramatic than we wanted. In this release we are actually removing the dockershim, and you can go pull down the current build, I think the beta build is out, or is this just another alpha build? You can pull it down and poke around with it yourself. We on the comms team are working very hard to make sure that this is not as dramatic as the deprecation announcement was, and that everybody has the resources and the knowledge they need to make sure that this change happens smoothly. A lot of you probably won't notice this change; it's not really as dramatic for most people, most teams, and most cluster admins as it sounds. It's just that the terminology, and the conflation of Docker and container, has led to some inaccurate assumptions about what's actually happening here. Any questions, you can put them in the Slack chat too, that's fine; I'll just read them out for the recording if you don't want to unmute and say something.
D
[Question from Paris, asking whether the team has any concerns about the removal; not captured in the transcript.]
C
No, but it can be kind of difficult to predict disasters, right? We didn't expect the deprecation announcement to turn into a train wreck; we thought it would be fine, and it was not fine. So we're putting a lot of effort into technical blogs on things like how to stand up a 1.24 cluster, what you need to do if you're running an older version of Kubernetes and you are relying on Docker, as well as the history of the dockershim deprecation and the context around why we needed this to happen and what it will be like moving forward. So we don't have any concerns yet, but you know, things can be unpredictable. There are a lot of people involved in this, so we are absolutely going to be keeping a close eye on all community channels to make sure that any issues are addressed as quickly and efficiently as possible. Nate.
E
Hello, I'm the docs lead for 1.24, this release coming up. In terms of Paris's question about concerns, I don't think that we've got any concerns from the docs side of the dockershim removal, except that there are more than the usual number of KEPs affecting the documentation. I've shared in the chat the dockershim removal project board, and there are some issues relating to the documentation side of the dockershim removal; there should still be some good first issues in there. There are some unassigned issues, so if you've got the time and the inclination, we'd welcome contributions for the documentation team.
G
Hey, thanks for all the hard work on the dockershim deprecation stuff. Just to echo it, the number one question that I get from people is: how do I know if this affects me? Not "what do I do," but "how do I know." So I think that's the place where we can continue to message better: how can people figure out if this matters to them?
C
Right, so that was part of the drama we saw with the deprecation announcement; it was actually a pretty significant part of it. There was a lot of misunderstanding about what this actually meant, and so people had a hard time figuring out whether or not it mattered to them. But one of the blogs that will come out around the removal does include a lot of content about "how do I know whether or not I need to be concerned." Some of that content will come out before the release, and some of it will come out on release day, but you're seeing a quantity of content around this removal that is probably outsized compared to other enhancements, specifically because we don't want to risk at all that somebody might not see it. So some of the content might feel a little bit repetitive or redundant if you've read all of the other content, but the reality is that most people won't, so we need to take as many angles as we can to allay those fears.
C
But yes, that is our primary concern: people not realizing that this affects them, or people thinking that it affects them when it doesn't and panicking. That is our number one concern, and we are doing as much as we can to control it, but we could always be doing more. So if you have a moderately large Twitter account, please feel free to tweet any of the content we put out, and share it with your co-workers.
G
Plus a hundred, everybody here. You should be shouting this from the rooftops and repeating it and repeating it and repeating it, and I don't care if people get tired of it. I don't want to hear "oh my god, you blew up my cluster" on, whatever, the 20th of April.
C
Right, and the reality is that for most managed clusters on a cloud service, unless you have gone out of your way to use Docker as your runtime, this probably doesn't affect you. It's almost certain that it doesn't affect you if you're using a managed service on some cloud provider. So our targets are, for the most part, people who are rolling their own, and we're doing all that we can. But yes, we could always do more.
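[Editor's note: as a concrete illustration of the "does this affect me" check discussed above, the runtime each node reports is visible in its status, for example via `kubectl get nodes -o wide` under the CONTAINER-RUNTIME column. A minimal client-go sketch of the same check follows; it is illustrative only, not an official detection tool, and it assumes a kubeconfig at the default location.]

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Nodes report their runtime as e.g. "docker://20.10.12" or "containerd://1.5.9".
		runtime := node.Status.NodeInfo.ContainerRuntimeVersion
		if strings.HasPrefix(runtime, "docker://") {
			fmt.Printf("%s: %s -> uses dockershim, affected by the 1.24 removal\n", node.Name, runtime)
		} else {
			fmt.Printf("%s: %s -> not using dockershim\n", node.Name, runtime)
		}
	}
}
```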
F
Yeah, I just want to thank everybody for the work they've been doing on this. It is as much docs as it is technical sometimes, so both skill sets are needed. Please come help if you are technical or a docs writer. And I know that there are a couple of people working on tools right now for things like dockershim usage detection inside clusters, and they're getting held up in OSPOs, so there will be more later on that, I think.
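[Editor's note: for a sense of what such a usage-detection tool looks for, one common dockershim dependency is workloads that mount the Docker socket directly. Below is a minimal, hypothetical client-go sketch of that one check; real detectors cover more cases, such as the node runtime shown earlier.]

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List pods across all namespaces and flag any that hostPath-mount the Docker socket.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, vol := range pod.Spec.Volumes {
			if vol.HostPath != nil && vol.HostPath.Path == "/var/run/docker.sock" {
				fmt.Printf("%s/%s mounts the Docker socket and will break without Docker on the node\n",
					pod.Namespace, pod.Name)
			}
		}
	}
}
```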
C
It is a ton of work on the part of a bunch of different teams. Have we ever mobilized this many different teams and this many different people for an enhancement? I don't think so. We are all a little bit skittish after how bad the deprecation got. Yes, it's a lot of people doing a lot of work that is almost overwhelmingly unpaid, because, you know, it's open source, so we're doing this for free to make sure that people can function. It's a lot of effort, a lot of emotional effort as well. So, any more questions, concerns, statements, cat photos, actually?
E
I just saw in the chat that Bob brings up a very good point: the docs changes are a blocker for this particular release as well. So that's even more encouragement, if you've got some spare cycles, we'd love to have you come out and contribute to the docs project.
A
Yeah, well, thank you so much, Kat, for the explanation and for moderating that discussion, and Paris and Nate and Tim and Bob and Chris for contributing to that discussion. We're going to move along to our next topic, which is another enhancement proposal, perhaps not as big of a deal or causing as much of a hiccup, but I'm going to pass it over to Wojciech, who's going to talk about raising the reliability bar.
H
Yeah, thank you. I wasn't the one who added this topic, so let me just start with a brief introduction of what it is. I'm assuming that people will want to ask more questions, so I want this to be more interactive.
H
So the goal that we want to achieve with this proposal is to ensure that reliability is actually the business of everyone who is contributing to Kubernetes. I know, and it's natural, there's nothing wrong with it, that everyone prefers the shiny feature work and stuff like that. But in the end, we are doing this job for our users, and reliability is one of the things that our users are actually complaining about; we are hearing it a lot. So we want to ensure that this kind of, I would say, grungy work that most people don't really like doing, like ensuring that we have reliable tests, investing in reliability improvements, or debugging the bunch of issues that either our users are facing or we are finding in our tests, is actually also happening.
H
There is also some kind of stick mechanism, not just a carrot mechanism, which is: if a SIG is not picking up the flakes that are actually the top flakes, we might decide not to allow them to graduate some of their enhancements in the next release. But yeah, this is mostly the high-level picture, so I guess I will open it up for questions and concerns.
I
Okay, my main concern is that this proposal feels kind of predicated on the idea that the SIGs already have all the tools they need to improve reliability and are just not following up on it, and I don't really feel like that's the case. I feel like, for example, even for a test you own, tracking down the cause of a test flake is still hard, as is tracking down whether or not you're even responsible for a test. So I definitely applaud the idea that we should have an effort around this, and a WG Reliability to coordinate it, since this needs to bridge both the release team and SIG Testing. But it really feels like this should start with instrumentation, making better tools available, and talking to SIGs about what sort of tools they need in order to make their code more reliable, rather than starting out by saying "hey, let's block features."
H
No, I think that no one is saying that. As I mentioned a couple of times, it's not that I, or anyone, would like to block features immediately. In the ideal world, I would really like to not use that stick anywhere, anytime, and I hope that will actually be the case. And if it can help, I'm happy, we are happy, we are discussing that with David and others.
H
We are happy to start in a kind of dry-run mode and say that there is no stick at this point: we will just be doing everything that is proposed, except for blocking anyone in, say, 1.25, or 1.25 and 1.26, see how this goes, and get back to this discussion if this work is not picked up. If that can help, I think that should remove the most contentious point from this proposal.
H
I'm sorry, I forgot what I wanted to say. Oh yeah, regarding the reliability metrics that you mentioned: yes, I agree, we need them for further phases. But we all know that the tests are discovering a bunch of issues. I can, and probably a lot of people can, provide examples where we actually had tests that were able to catch certain bugs that we then hit in production. We just didn't find them because of flakiness, because there was so much noise in the tests and so on.
H
So I think we all know that this will help a lot to improve reliability. I just don't want to lose time and wait until we agree on different definitions, because that will take time: those discussions will have a lot of input, everyone will have their input, and it will take time to build a consensus. And I don't think it will really change anything for this particular first phase of test flakiness.
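[Editor's note: for readers unfamiliar with how flakiness is quantified in this discussion, a test is generally considered flaky when it produces both passing and failing results against the same code. The Go sketch below is hypothetical, with made-up types and data, just to make the first-phase metric concrete; it is not the project's actual tooling.]

```go
package main

import "fmt"

// Result is one CI run of a test against a particular commit.
type Result struct {
	Test   string
	Commit string
	Passed bool
}

// flakyCommits counts, per test, how many commits saw both a pass and a
// fail -- the usual working definition of a flake, since the code under
// test did not change between those runs.
func flakyCommits(results []Result) map[string]int {
	type key struct{ test, commit string }
	passed := map[key]bool{}
	failed := map[key]bool{}
	for _, r := range results {
		k := key{r.Test, r.Commit}
		if r.Passed {
			passed[k] = true
		} else {
			failed[k] = true
		}
	}
	counts := map[string]int{}
	for k := range passed {
		if failed[k] {
			counts[k.test]++
		}
	}
	return counts
}

func main() {
	// Made-up data: TestB fails and then passes on the same commit, so it flakes.
	results := []Result{
		{"TestA", "abc123", true},
		{"TestB", "abc123", false},
		{"TestB", "abc123", true},
		{"TestB", "def456", true},
	}
	for test, n := range flakyCommits(results) {
		fmt.Printf("%s flaked on %d commit(s)\n", test, n)
	}
}
```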
J
Would it be accurate to maybe rephrase that as: while there may be additional ways to measure reliability, one measure that we have today is test flakiness, and we could start by trying to improve the measure that we have, in addition to trying to gather additional measures, if somebody is so inspired?
K
So at the moment the CI signal report reports weekly on test flakiness.
J
Yes, and some SIGs actually go through and check their flaky tests today.
H
I think we should merge those efforts, yes. I think we should merge this into the proposal.
K
Because it's just that you make it sound like there's no effort being put into tracking flakiness, and flakiness is how we track reliability issues right now. So I think, if you're interested in the array of teams that are working on this, you should reach out to current and past CI signal team members.
K
And do you have metrics to back that claim up? Because essentially, when we track flakiness at the moment, there's a lot that I agree with you on, insofar as when we go to SIGs to track down flaky issues, we do have issues with SIGs being busy, with SIGs not being able to put in, as Josh says, the labor and effort to figure out what the root causes are for specific instances of flakiness.
K
I would always say that the apex of expertise in tracking down flakiness sits inside SIG Testing, and I'd agree with Josh that, in order for us to expect SIG team members to chase down flaky issues, a lot of work has been done to educate them on what the issues are and on how to get to root cause, whether it's infrastructure or not infrastructure. There's a lot of material and content that exists to assist SIGs in doing that. But I think possibly what we have here is a resourcing issue; that's certainly one side of it, and the other side of it is that we do need improved tooling on our CI and on our CI signal.
L
The whole problem is that we are not tracking that; I think that's what this capability addresses. Currently we only track flakes on the release dashboard, and it's a problem in general. And the thing is, we are not investing in testing either, so it is not only about tracking tests. We have a tool to track flakes, and that's what phase one means: right now, what are the flakes?
L
We are still in this way of working where some people check the flakes, some people punt to others, and we are in this cycle. We need to improve, because the quality of the project is not good. The project is progressing, we are adding more features, but we are not adding more testing. We can see it.
K
We all want more reliability, but I think what we need here is data to back up the assertions that we're making, and we need to look at what we are doing and what we're not doing. There are a lot of things being done: what the CI signal team does is track flakiness in end-to-end tests, and you're right that that is all they do.
K
But if we need to start tracking other specific things, then we need to make a list of what is not being tracked and what needs to be tracked, and then start writing the tools to track those things. I think one of the ironic things we have here is that on the weekly cadence we gather a lot of data on end-to-end tests, and we're not really storing that data well.
K
Well, I mean, there is data stored, but it's not very free and open in terms of access. So I'd be happy to participate in this effort, make a list, and try to drive what needs to be done on this.
K
But I'd like to see us acknowledge what is being done and what isn't being done, and list out those two things. And if you talk to the people that are tracking end-to-end test flakiness and ask them for their experience, they will give you plenty of ideas on what tools need to be developed and what data needs to be lifted, tracked, and stored, so that we can figure out how to measure reliability and how to improve it.
J
Thanks for that summary. I'm hopeful, well, actually, I'm confident, that we'll see an update listing out those existing efforts. It's worth noting that one distinction in Wojciech's proposal is that it's not just collecting the data and making people aware; it's also trying to help handle the situation when you encounter very busy SIGs, by helping them see the importance and priority of stabilizing what already exists, as opposed to being busy adding new features. A large portion of the proposal talks about how to encourage that shift, which hopefully will help.
K
Yeah, I think SIGs will be happy to get stuck in and help out when it's easier to figure out the root cause of flakiness. But of course, if there are other metrics that we should be looking at, we should be calling those out and developing the tooling to shine a light on where we need to improve.
M
Yeah, can you hear me? Okay, great. So, one thing to add to the CI signal stuff, because I'm the CI signal lead for this release: I'm working on a couple of things to improve at least the tracking for the CI signal boards that we are looking at, so sig-release-master-blocking, -informing, and so on. We are working on a dashboard which would also, eventually, send the reports to Slack; that is still under discussion, and the weekly reports are also still under discussion, basically what the format should be and so on. I think a lot of things will also be discussed in the mid-cycle retro, which I think is on the 23rd of March. Until then we will see how far everything goes, and then we will definitely discuss a bit more about the CI signal stuff. But yeah, it would be great to have feedback.
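[Editor's note: the Slack reporting M describes is still under discussion, so nothing here is the team's actual implementation. For readers curious what "sends the reports to Slack" typically involves, a minimal sketch using a standard Slack incoming webhook follows; the webhook URL and report text are placeholders.]

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// postToSlack sends a plain-text message to a Slack incoming webhook.
func postToSlack(webhookURL, text string) error {
	payload, err := json.Marshal(map[string]string{"text": text})
	if err != nil {
		return err
	}
	resp, err := http.Post(webhookURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("slack webhook returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Hypothetical weekly CI signal summary.
	report := "CI signal 2022-02-17: 3 failing jobs, 7 flaking jobs on sig-release-master-blocking"
	if err := postToSlack("https://hooks.slack.com/services/T000/B000/XXXX", report); err != nil {
		fmt.Println("failed to post report:", err)
	}
}
```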
A
Thanks, Leo, and thanks for putting that link in the chat. Pretty lively discussion about this one. Wojciech, did you have anything else you wanted to say, or does anyone else want to talk about this reliability proposal?
D
What's the consensus that you've gotten from SIG leads to date? Has it been a majority consensus? What does the lay of the land look like there?
H
Okay, I guess I'm answering Nigel's question: I will follow up with different folks here as well and try to move this forward.
A
I think Josh has captured the spirit of the conversation in the notes, so we have something to go on there. Okay, that brings us to our next topic: we wanted to chat just briefly about the k-dev migration that took place, so I want to hand it over to Paris for that.
D
There are pretty much no actions for anyone to take; things went smoothly. We do have intentions of doing this for all of our SIG groups. We do have a blocker right now with the migration tool: as it stands, when you migrate people from Google Group to Google Group, it's 50 a day. Obviously that doesn't go well for some of our community groups that have thousands of members, so we built a migration tool.
D
It's a little hacky right now, and we need to continue to smooth out that tool before we roll it out to all of our community groups. But once we do, all the leads will know: we'll have guidance prepared, if we're not able to do it for you. So, lots of good things with APIs in the future, and smoothing out our communication processes.
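[Editor's note: to make the 50-a-day constraint concrete, any migration tool has to split a member list into daily batches of at most 50, so a group of, say, 3,000 members takes 60 days at that rate. The real tool mentioned above is not shown here; the batching sketch below is purely illustrative.]

```go
package main

import "fmt"

// batches splits members into chunks of at most size, one chunk per day,
// matching a per-day quota like the 50-member Google Group migration limit.
func batches(members []string, size int) [][]string {
	var out [][]string
	for len(members) > 0 {
		n := size
		if len(members) < n {
			n = len(members)
		}
		out = append(out, members[:n])
		members = members[n:]
	}
	return out
}

func main() {
	// Hypothetical group of 3000 members at 50 migrations per day.
	members := make([]string, 3000)
	for i := range members {
		members[i] = fmt.Sprintf("member%d@example.com", i)
	}
	days := batches(members, 50)
	fmt.Printf("migration takes %d days\n", len(days)) // prints 60
}
```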
A
Awesome, thank you so much, Paris. Do we have anyone that wants to call out a KEP? We don't have any currently in the doc, but if there are some that folks want to bring up, now would be a great time for that. Josh, I see your hand.
I
Yeah, I just wanted to add two things to what Paris is saying. One is that there's kind of a minor action for folks, which is: if you own community documents, particularly Google Docs, that are shared with the old dev list, make sure that you share them with the new dev list as well. Just check, whenever you're logged into a document you own, who it is shared with. We have about a year to transition those over, but increasingly, people who are new members of the community won't have access to anything that hasn't been shared.
I
So add that. And the other thing I actually wanted to ask Paris: we're talking about migrating the other Google Groups, the ones belonging to SIGs and such, to kubernetes.io, correct?
D
Yeah, and then leads will have buckets and all kinds of fun stuff for organizing their documents, so lots of cool things.
D
Yes, but we did not kick anyone out of the old list, so nothing is breaking. The only people that will be affected are brand-new community members who are joining your groups.
I
That explains something that I've experienced this week. Thank you. Very good, trial and error, I like it.
A
Do we have any other questions about the k-dev migration?
A
Okay, now that you all have had a little bit of extra time to think about it, do we have any KEPs that we wanted to call out?
A
Okay, if not: we haven't had a community meeting in six months or so, and the activity in the shout-outs channel has been tremendous between now and then. So, instead of individually going through all of those, we would just like to direct your attention to the shout-outs channel to check out all of the wonderful things that people have been doing and getting appreciated for. Please, if you have a minute, check out all the cool stuff in the shout-outs channel.
A
That brings us to the end of our community meeting. Thank you so much for being here and for all of the great feedback. Thanks to Laura for all of the hard work you did making sure that this happened. We are planning to come back at you at a monthly cadence now, so we will see you next month. Thank you all so much for being here. Take care, goodbye.