From YouTube: Kubernetes Community Meeting 20190131
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 6pm UTC.
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more information!
A: All right, hello everyone, and welcome to another exciting and hopefully not too chilly community meeting. Just so everyone knows, this is being recorded and streamed, and we do have a code of conduct, so keep in mind that whatever you say will be on the permanent record. Also, unless you are speaking, please try to keep yourself muted.
A: So, first off we're going to have a demo by Adrian; he will be showing how to run Argo workflows across multiple Kubernetes clusters. We will then have release updates from SIG Beard, aka Aaron Crickenberger. We'll get SIG updates from SIG Azure and SIG Release, and then it will be time for announcements and shoutouts. So, Adrian, take it away — you have about ten minutes.
B: Looks good. All right, hello everyone. My name is Adrian; I'm the founder and CEO of Admiralty, a startup based in Seattle, and we focus on everything multi-cluster Kubernetes — we try to simplify and optimize multi-cloud stacks that are built on top of Kubernetes. So far we've published three open source projects. Multicluster-controller extends controller-runtime to make it possible to watch and reconcile resources across clusters.
B: The multicluster-scheduler is what I'm going to use today; it actually uses the two previous projects. It is, if you will, federation at the pod level: with pod annotations, you can schedule a pod to a different cluster, or let the multicluster-scheduler decide which cluster to schedule the pod in. I'm going to demo it today with Argo, so this is a use case — an example application — of the multicluster-scheduler.
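For context, electing a pod for multi-cluster scheduling is done with a pod annotation along these lines. This is a sketch only — the annotation key shown here is an assumption based on the project's naming; check the multicluster-scheduler README for the exact key it expects:

```yaml
# Hypothetical sketch: a pod opting in to multi-cluster scheduling.
# The annotation key below is an assumption, not confirmed by the talk.
apiVersion: v1
kind: Pod
metadata:
  name: step-pod
  annotations:
    multicluster.admiralty.io/elect: ""
spec:
  containers:
  - name: main
    image: busybox
    command: ["echo", "hello"]
```

With the annotation present, the scheduler — rather than the default kube-scheduler of one cluster — decides where the pod's delegate actually runs.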
B
If
some
of
you
download
our
go,
it's
workflow
engine
that
is
cue.
Brain
is
native.
There
is
a
workflow
CRD
and
a
controller
that
orchestrates
pods,
based
on
the
specification
of
the
workflow
or
the
manifest
of
the
workflows
then
built
on
top
of
that.
The
argo
community
has
come
up
with
continuous
delivery,
continuous
integration
in
eventing
systems,
it's
it
was
started
by
athletics
and
it's
which
was
acquired
by
into
it,
and
they
led
this
development.
B
So,
as
I
said,
the
goals
of
multi
cluster
scheduling
are
but
to
simplify
and
optimize,
so
I
want
to
simplify
workflows
or
in
general
workloads
that
you
know
would
require
you
to
run
some
steps
in
the
in
some
regions
or
some
clouds,
because
the
data
is
fragmented,
and
so
you
want
to
extract
it
from
extracted
from
different
places
or
you
need
to
call
some
managed
services
to
only
exist
in
some
clouds,
an
optimization.
If
you
run
several
clusters,
you
can't
really
take
advantage
of
efficient
impacting
between
them.
B: They're both running — I have both clusters as contexts in my kubeconfig — and I use aliases: k1 for kubectl with the cluster1 context, and k2 for the cluster2 version. All right.
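The aliases he describes are just context-scoped kubectl shortcuts, something like the following (the context names cluster1 and cluster2 are assumptions taken from the demo):

```shell
# Hypothetical aliases matching the demo:
# k1 targets the first cluster, k2 the second.
alias k1='kubectl --context cluster1'
alias k2='kubectl --context cluster2'

# Usage would then be, e.g.:
#   k1 get pods    # list pods in cluster1
#   k2 get pods    # list pods in cluster2
```

This keeps the demo commands short without having to repeat --context on every invocation.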
B: So let's have a look at our workflow, an Argo workflow. Workflow is the CRD, and what we define in its spec is a set of templates. There are step templates and container templates, and the step template is here.
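A minimal Argo Workflow of the shape he describes — a step template fanning out to a container template — looks roughly like this (a sketch in the style of the Argo documentation; the names and image are illustrative, not taken from the demo):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parallel-steps-
spec:
  entrypoint: main
  templates:
  - name: main              # step template: orchestrates other templates
    steps:
    - - name: step-a        # steps in the same inner list run in parallel
        template: whalesay
      - name: step-b
        template: whalesay
  - name: whalesay          # container template: each invocation runs a pod
    container:
      image: docker/whalesay
      command: [cowsay, "hello"]
```

The workflow controller turns each container-template invocation into a pod, which is what gives the multicluster-scheduler something to place.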
B: As you can see, only three steps are able to run concurrently, because there aren't enough resources available in cluster 1 to run everything. So this is going to take a while — maybe a minute — so in the meantime I'm going to show you how it works.
B
Yeah,
so
it's
almost
done
yeah.
It
took
more
than
a
minute.
So
now,
let's,
let's
run
the
multi
cluster
version.
B
As
you
can
see,
so
all
containers
are
created
at
the
same
time.
This
in
this.
In
this
example,
that's
because
the
proxy
pods
don't
have
resource
requests.
Oh
I
could
make
them
very
little
it's
a
proxy,
but
there's
nothing,
it
just
waits
to
be
killed,
and
so,
at
the
same
time
the
delegate
pods
are
created
by
either
in
cluster
1
and
cluster
2
and
because
there
is
more
available
resources
over
both
clusters,
so
the
job
can
be
done
faster
30
seconds
done.
B: So — oh yeah, there's a bunch of nginx pods that I had created just to generate some baseline load, to model some existing use of the cluster. In the single-cluster run, the ten steps all ran in the same cluster. Here are the ten proxy pods, but only three actual delegate pods ran in this cluster, whereas a lot of them actually ran in cluster 2, because that made more sense.
B: We have a quick animation to show you what happens. The user — or in this case the workflow controller — created a pod; because it was annotated, it was transformed into a proxy pod. Then in the scheduler cluster, which in this case is the same as cluster 1, a pod observation was created, and the agent of the multicluster-scheduler created the pod observation for the other cluster as well. The scheduler does its scheduling job and decides where to put the pod, and then the agent in the other cluster observes, sees that decision, and creates the delegate pod.
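The dance he describes — proxy pod, pod observation, scheduling decision, delegate pod — can be sketched as a toy in-memory simulation. This illustrates only the control flow; the function names and data structures are invented for the sketch and are not the real multicluster-scheduler APIs:

```python
# Toy simulation of the proxy/delegate pod flow described above.
# Everything here is illustrative, not actual Kubernetes or Admiralty code.

def create_proxy_pod(source_cluster, pod_name):
    """The annotated pod is transformed into a lightweight proxy."""
    source_cluster["pods"][pod_name] = {"role": "proxy", "phase": "Pending"}
    return {"name": pod_name, "from": source_cluster["name"]}

def observe(scheduler_cluster, pod_obs):
    """The agent mirrors the pod as an observation in the scheduler cluster."""
    scheduler_cluster["observations"].append(pod_obs)

def schedule(scheduler_cluster, clusters):
    """The multi-cluster scheduler picks the cluster with the most free capacity."""
    decisions = {}
    for obs in scheduler_cluster["observations"]:
        target = max(clusters, key=lambda c: c["free_cpu"])
        target["free_cpu"] -= 1          # reserve capacity for the delegate
        decisions[obs["name"]] = target["name"]
    return decisions

def create_delegates(decisions, clusters):
    """Agents in the target clusters see the decisions and create delegate pods."""
    by_name = {c["name"]: c for c in clusters}
    for pod_name, cluster_name in decisions.items():
        by_name[cluster_name]["pods"][pod_name] = {"role": "delegate", "phase": "Running"}

cluster1 = {"name": "cluster1", "free_cpu": 1, "pods": {}, "observations": []}
cluster2 = {"name": "cluster2", "free_cpu": 3, "pods": {}, "observations": []}
clusters = [cluster1, cluster2]

# As in the demo, cluster1 doubles as the scheduler cluster.
obs = create_proxy_pod(cluster1, "step-1")
observe(cluster1, obs)
decisions = schedule(cluster1, clusters)
create_delegates(decisions, clusters)
print(decisions)  # step-1 lands on cluster2, which has more free capacity
```

The real system does this with CRDs and controllers watching two API servers, but the ordering — proxy, observation, decision, delegate — is the same.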
B: That delegate pod is observed as well, so a pod observation is created for it too, so that the feedback controller can later — when the pod has done its job — update the proxy pod to tell Argo that the step has completed.
B: So here's the roadmap for the multicluster-scheduler. It's just the beginning; there's still a lot to do. I want to test some more integrations, which will require more features: some advanced scheduling, some finely tuned RBAC, and other features.
D: So this is week four. We had enhancements freeze come and go, and it was great, if a little bumpy. Then everybody got really confused about what enhancements freeze really meant; then we kind of realized the docs maybe weren't super well liked; then we realized there are, like, two different ways of doing enhancements freeze; and then we realized that exception requests maybe don't necessarily apply to docs. So I'm gonna share my screen and walk us through a little bit of where we are today.
D: Let's see... there's that. Okay. So if you ever want to see how the release team functions, I always link this at the top of our update: "here we are at week four", which will always take you to the schedule to tell you where we are right now, and you can see the rest of the schedule. But just to give you the brief update: we passed the enhancement freeze, so the next milestone ahead of us is KEPs implementable, Monday, February 4th. I'll talk more about that.
D: The next milestone after that I wanted to tell you about is burndown. Burndown is when we start to pick up the pace of meetings and collaboration, and we start to talk about eliminating bugs and focusing on stability, things of that nature. Burndown, coincidentally, starts about a week before code freeze — because, remember, we got rid of code slush — so burndown is really when we're gonna be brainstorming
D: the best and most effective means to share with the community that code freeze is coming; I feel like we could do better than "brace yourself", so think harder. In terms of builds: we cut a 1.14 alpha this week. Here's the issue if you want to see how that went, with links to go get the bits. We're gonna cut the next alpha two weeks from when we cut this one, so Tuesday, February 12. In terms of enhancements: the enhancements freeze was Tuesday.
D: The documents for enhancements freeze now say that every enhancement that lands must have an associated tracking issue in the 1.14 milestone, and it must have an associated KEP that is merged or in progress. What we decided — because everybody got kind of confused about what KEPs are and how we use them — is that we'll take an extra week to try and get KEPs to implementable status, by Monday, February 4th. So, of the KEPs that we have right now that did get merged with implementable status — which means everybody's agreed:
D: yes, this looks good; yes, it's got a test plan; yes, it has graduation criteria; go forth and code, and we shall evaluate it later — eight of those are planning to land as alpha, eight are planning to land as beta, and nine are planning to land as stable. That makes 25 total, and that's cool: more stable than not. And then Monday the fourth will be the sum of enhancements.
D: We really appreciate your patience — those of you who've come out to Slack to talk about this, and those of you who've asked for clarifications. Steven Augustus, who is the chair for SIG PM — the SIG that owns the KEP process in general — sent out a great clarifying email, I think, but even still, maybe the email is still confusing.
D: For some people it is. So, like, if you get your KEP merged and into implementable by Monday, that's great; if not, you're going to have to file an exception request. A lot of people are just erring on the side of caution and filing exception requests anyway. They're great — what are they? How do they even work? I don't know; just try and be a human being and let us know: "hey, I was trying to get this in by enhancements freeze, and here's a KEP" — and we'll work with you.
D: Exception requests are going to become a lot more tightly reviewed as we get closer to code freeze, as are your KEPs, as are the test plans and graduation criteria in them. In terms of CI signal: Maria is putting out a CI signal report in this Google Doc. We've had a couple of things resolved, a couple of things that are failing with fixes in flight, and a couple of issues that, unfortunately, still haven't quite been responded to — so we're just trying to reach out to the appropriate folks there.
D: Maria, the CI signal lead, looks at a couple of dashboards: the release-master-blocking dashboard and the release-master-upgrade dashboard. There's a lot of purple, which means flaky; there's a lot of green, which means all pass; and there is some red, which makes me a little sad. The upgrade dashboard is maybe a little less happy. By the way, part of that dashboard — the kops AWS job failing continuously — is a thing we are aware of.
D: It's also failing on pull requests, so you may have seen an email sent to kubernetes-dev letting everybody know that the pull request job is now optional and non-blocking. Optional, meaning it does get triggered for literally every pull request; non-blocking, meaning it's not gonna block your PR. If for some reason it does seem like it's blocking your PR, you might be able to use a skip command to work around that. So see that email, and come ask questions in SIG Testing.
D: There are some SIGs that already do this: SIG Cluster Lifecycle is a great SIG that does this, SIG Storage is a great SIG that does this, and I'm fairly certain SIG Network also does this. So I have a PR in flight where, basically, jobs that end up running all of the tests for all of the SIGs are going to be owned by SIG Release, under the purview of the CI signal team; we will triage and find the appropriate SIG to take care of the test.
D: Some jobs are clearly in the purview of specific SIGs. The node job is probably owned by SIG Node; the kops job — since kops is a SIG Cluster Lifecycle subproject — is probably a SIG Cluster Lifecycle job. So we're gonna be reaching out to these individual SIGs about what email address is appropriate and who should be on that email address, and let's make sure we start using that going forward. Look for more info about that next week.
C: My mute button got stuck — hey, we only have two SIG updates. Would it make sense — sorry to, like, call [inaudible] — to just have a quick session on whether anybody has any questions about their KEPs? Because I see a lot of exception reports, and it feels like this is one of the meetings where all the right people are in the room. Or is there, like, a dedicated meeting for this?
D: We're gonna sweep through and look for things that may have been kicked out of the milestone because they didn't have a KEP, and we can add them back in. If you've already gone ahead and filed an exception request, and your reason for the exception request is "we didn't have a KEP and we do now — please look at it", or "our KEP was so, so close, but it didn't quite get merged, and it is merged now — could you please look at it?",
D: we're gonna be super lenient about this — human beings about this — up until Monday. But beyond that, it's gonna start to be a little bit more of a subjective, gut-feel call, which is kind of what this has always been, as much as I would like to have explicit criteria listed. When I put out a feeler to previous release leads — like, "so how do you actually handle this?" — shrug. Yeah.
D: Yeah, I could not agree more. I do want to send a huge thanks to everybody who's stepped up and helped out. I think it would probably be good to set up a call Monday about the exceptions — we can definitely talk about exceptions in depth at the regularly scheduled release team meeting, and you can come ping the sig-release channel about this. Okay.
E: Right — let me just share my screen out here... looks good. All right, so a quick update on SIG Azure. Some things we did last cycle: last cycle was really focused on adding some additional new disk feature types to Azure — things like ultra SSD, standard SSD, and premium files. We also focused on implementing shared availability zones and cross-resource-group nodes. All of that moved from alpha to beta in 1.13.
E: As for the bug fixes we were focused on, I would say the biggest ones were things around attach and detach for disks.
E: There were also some updates to how Azure manages its instance metadata, adding caches there. Plans for upcoming cycles: the out-of-tree cloud provider for Azure — the target for that is alpha. A lot of the tests have been completed for it, and there's also been a KEP merged for it.
E: The Cluster API implementation: Platform9 did a lot of work on the Cluster API provider for Azure, and that's actually moved over into a kubernetes repo; the target for that is alpha.
E: Cluster autoscaler for Azure: we're targeting GA for that. Also, CSI drivers for Azure Disk and Azure Files: there's been a new repo created for those, and they're targeting alpha. The VMSS items were more enhancements, and those may be delayed due to internal API issues in Azure — we'll see where they land. Cross-resource-group node support will move into beta, and availability zone support will also be beta.
E: So how can users contribute to SIG Azure? One thing that we need to do better — that we really haven't done — is labeling things as good first issues, so we'll be working on making sure some of those issues get the good-first-issue label. There's the sig-azure Slack channel; we're very active in that group, and we have Azure internal and upstream engineers that hang out in there, so it's a pretty active group. There's also a group for the Cluster API, where there's still a lot of work to do.
F: I suppose... all right. So this is the SIG Release update for the quarter. Things we did last cycle: we actually had our first presentation like this last time around, so that's a positive — we're trying to be a little more organized and visible. Related to that, we've been doing a lot of updating and organizing of our documentation, in an effort to get more serious beyond just the given release team. But obviously, last cycle we had the 1.13 release.
F
We've
kind
of
talked
about
the
highlights
of
that
already
in
this
meeting,
but
shorter
than
normal
cycle
shortened
code
freeze
continued
from
before
we're
continuing
to
build
shadows
and
team
and
trying
to
keep
that
sustainable
and
also
give
some
longevity
across
releases.
So
it's
not
just
a
team
and
a
team
and
a
team
and
little
continuity
and
hopefully
that's
benefiting
Arin
in
some
of
the
ways
that
he
described
earlier,
where
we're
refining
and
building
these
processes
and
having
more
consistent
involvement
between
them
or
across
them
rather
and
also
last
cycle.
F: Also last cycle, we proposed the creation of a working group to discuss support topics, and that is in progress. Plans for the upcoming cycle: we are formalizing some SIG subprojects — again, related to trying to activate on areas where we see need. The first of those is the release team: it's well established, and we want to move beyond just doing the point releases each quarter and think about the bigger release engineering processes.
F: Again, as Aaron mentioned: de-flaking CI. And he mentioned the alpha build that was just done — we're trying to do more and earlier building during the release cycle, and I'll talk a little bit more about that and why it's important. How these plans affect you: KEPs — I think Aaron just talked about this, and we'll be talking more about it. So, the release engineering subproject — I want to go into a little bit more detail on this.
F: What it is: we sort of have a problem. I think practically everybody on the project is familiar with the state that we have in terms of dev. You have kubernetes/kubernetes; there's some build stuff in the tree; there's CI — awesome, we do this well, right? But there's also a thing called kubernetes/release, and there are build things over there, and that's what we use to make the official releases. What this means is that we have a split between dev and production, and that's often a bad thing.
F: So you have this split, and you end up with two paths for artifacts. There are artifacts that are consumed by automation and testing, and that's good; but then we have artifacts that are consumed by our downstream users of the Kubernetes project. A lot of the creation there is manual, and testing of the artifacts specifically generated in that flow is very minimal. Yes, it's coming from kubernetes/kubernetes, which has been tested...
F: So: help wanted. There's a lot of room for work here, and we need hands to do more. That's sort of the high level of what we aim to do as time goes by. The other subproject — the major one, the release team — I think we've talked about in detail; Aaron just gave it quite a bit of coverage. But two big things, literally: big KEPs and flaky tests, and we're looking to improve on both of those. Related: KEPs.
F: Again, this is just a screen cap, but there's a link to the email that Aaron mentioned — the one Steven Augustus sent out — with the details of what folks need to do, basically today, to get KEPs finalized for the 1.14 milestone. And then the other one, related to release engineering: there are multiple — three or four — KEPs sort of in flight around this concept of artifact generation, packaging, artifact management, and publication, all of that stuff. We just had a big meeting; the YouTube recording and minutes are online for that. On working group status:
F: yes, we've talked about some of this previously, I think, but it's kind of in flight and happening; discussions are underway. The key actions, I would say: there's work to do a survey of users, operators, and vendors, to understand what they're looking for in support and what they want, and to be able to evaluate that against what we do today. Because in addition to that dev/prod continuum, at the end of it is support, and we don't want to be wasting resources on unnecessary support for our user base — or, if we have gaps,
F: we want to try to improve that as well. And then part of that is API promotion and stabilization — getting more of the core towards v1, stable. How you can contribute to SIG Release and release engineering: obviously we have a need there. I think a lot of people in our community have experience with build, release, and development sorts of things and could have some very applicable skills here — or, if you want to learn these things, it's an opportunity.
G: Google Summer of Code project suggestions and offers to mentor are due ASAP. The CNCF put in the application to Google for interns for all of its projects, and we're trying to at least double our presence this year — a lot of our top contributors have come from this program. If your SIG does not have a Google Summer of Code intern application in, ask folks. I think all of our SIGs have this opportunity, especially as I hear folks talking about how they need new contributors, so please take advantage of it.
C: So, chairs and colleagues: you should have received an email last year from Paris reminding us that we need to move to host-based keys for hosting all of your SIG meetings. We're going back and checking, and we had not done that — and this is the one thing blocking us from publishing the public calendar back on the website, which should be extremely useful for all of us. So there's an issue there.
G: One more: Meet Our Contributors is next week — next Wednesday. We are moving to having two regular sessions again, meaning the steering committee session is not going to be included anymore. The reason is that the steering committee is moving to public bi-weekly meetings, meaning you can actually go and do business with the steering committee live, and more information will be out on the kubernetes-dev Google Groups mailing list — everybody should be on that.
G: Yes — if you are brand-new and just landed your first PR, that experience is also very valuable to others who have not landed a PR yet. So yes, your experience does count and matters as a mentor. Please come and see me: again, Paris on Slack, or Paris Pittman at google.com. We have a 7:30 a.m. Pacific and a 1 p.m. Pacific session for that. Thanks, y'all.
A: On to shoutouts. Wow — I want to shout out, and I'm gonna butcher it again, Steve Kuznetsov. He always takes the time to fill in his issues on the #sig-testing weekly call agenda ahead of time, with great detail and links; this makes taking notes in real time much easier when it's his issues. Thank you, Steve. Paris wants to thank Mr. Bobby Tables for knocking out a ton of quote-unquote "glue work" in our — that's weird,
A
It
kind
of
cut
it
off
glue
work
in
our
communication,
documentation
to
make
our
processes
more
complete
and
transparent
in
kubernetes
community
communication
and
closing
several
issues
around
them.
Thank
you
so
much
Bob
and
last
but
not
least,
at
spiff
XP
shout
out
to
Liggett
for
suggesting
the
kep
extension
to
release,
freeze
and
providing
the
language
for
for
it
and
with
that,
is
there
any
final
business
anything
anyone
else
wants
to
talk
about
because
we
are
reaching
the
end.