From YouTube: Kubernetes Community Meeting 20180614
Description
This is our public weekly meeting, for more information, check it out here: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
B: Good morning, Kuberverse, on this wonderful, wonderful Thursday morning. It's 10:00 a.m. Pacific time, or whatever time it is wherever you watch the stream. It's that time on June 14th — welcome to the Kubernetes community weekly meeting. I just want to tell everyone before we get started: it would be awesome if, when you're not speaking, you mute your microphone, so that we don't have an attack of the keyboard sounds. Also, just a reminder that this is a community meeting and it's being publicly recorded right now.
B: So just be mindful of what you say — the Internet is forever. For those who don't know me, my name is Zach Arnold. I'm a software engineer for a finance company here in the Bay Area called Ygrene Energy Fund — it's the word "energy" spelled backwards, which I found out six months after I started working there. We are lovely and in love with the Kubernetes ecosystem, and very, very happy fans. So thank you. Two quick things before we get started. We are 15 shy, which is — that's a pretty big milestone; those numbers are going up quickly, and so I encourage all of you, if you haven't yet done it, to go over to YouTube, search for the Kubernetes channel, and subscribe. And also, thank you so much to the AWS team: you've launched EKS and made a lot of our lives easier this week, and we're all very excited to welcome them to the managed provider hosts. So without further ado, let's jump in and take a look at this agenda. We are going to start with our demo.
C: Okay, cool. Yeah, so my name is Priya, I work at Google on the container tools team, and I'm going to be talking about Kaniko today. So, what is Kaniko? Kaniko is basically a tool used to build images in a Kubernetes cluster. Before Kaniko, I guess the way a lot of people were building images in their clusters was by mounting in Docker sockets, or by using other tools which maybe didn't provide the full generality of a Dockerfile.
C: So how does Kaniko work? Kaniko is basically distributed as an image. It's an image built from scratch, and it contains a few files, including some credential helpers which are used to push to different registries — right now I think we have a GCR credential helper and one for Amazon ECR — along with some files needed for authentication. But the main thing in the Kaniko image is an executor binary, which basically does the entire process of building the image and then pushing it at the very end.
C: The way that Kaniko does this is by first parsing the Dockerfile that you want to build. It extracts the base image of the Dockerfile to root within the container — the base image is the image in the FROM line of your Dockerfile — and it takes an initial snapshot of what your base image looks like and stores it in memory. Then Kaniko will basically go through every command in your Dockerfile, executing that command.
C: It takes a snapshot after each execution to look for any files that may have changed in your filesystem, and any files that are different will be appended as a new layer onto your base image. In this manner, Kaniko will execute each command, take a snapshot, and then append a layer if any files have changed. That's like building your entire image on top of the base image that you specified in your Dockerfile. Multistage builds are really similar.
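The execute-then-snapshot-then-diff loop Priya describes can be sketched in a few lines of Python. This is only a toy illustration of the idea under simplifying assumptions — Kaniko itself is written in Go and compares file metadata and hashes; the function names here are made up:

```python
import hashlib
import os


def snapshot(root):
    """Map every file under `root` to a digest of its contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return state


def changed_files(before, after):
    """Files added or modified between two snapshots form a new layer."""
    return sorted(p for p, digest in after.items() if before.get(p) != digest)


def build(root, commands):
    """Run each 'Dockerfile command', snapshotting after each one.

    `commands` stands in for RUN/COPY steps; each is a callable that
    mutates the filesystem under `root`. Returns the list of layers,
    each layer being the files that changed during that step.
    """
    layers = []
    state = snapshot(root)  # initial snapshot of the base-image filesystem
    for command in commands:
        command(root)       # execute the command
        new_state = snapshot(root)
        diff = changed_files(state, new_state)
        if diff:            # only changed files are appended as a layer
            layers.append(diff)
        state = new_state
    return layers
```

A step that touches no files produces no layer, which mirrors the point that only filesystem differences are appended onto the base image.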
C: Basically, Kaniko will extract the base image filesystem, execute the commands, and then delete the directory into which it extracted, running through the different stages of your Dockerfile and building your final image. Great — so here's a sample pod YAML for running Kaniko in Kubernetes. In this example:
C: Basically, all you need is to run the Kaniko image, which is specified right here, and — oh whoops, sorry — some arguments. You can just specify a path to your Dockerfile, and Kaniko accepts two different contexts right now: you can specify a local directory, with a different flag, as the build context for your image build, or a GCS bucket. To use the GCS bucket, you basically create a tarball of your build context and upload it to the bucket, and then Kaniko takes it from there.
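The tarball step for the GCS context can be sketched in Python. This is a minimal illustration under assumptions — the function name and archive name are invented here, and the actual upload to the bucket (for example with `gsutil cp`) is left out:

```python
import tarfile


def package_build_context(context_dir, out_path="context.tar.gz"):
    """Tar and gzip a local build-context directory so it can be
    uploaded to a GCS bucket for Kaniko to use as its context."""
    with tarfile.open(out_path, "w:gz") as tar:
        # arcname="." keeps paths relative, so the Dockerfile sits at
        # the root of the archive rather than under the host path.
        tar.add(context_dir, arcname=".")
    return out_path
```

The resulting archive contains the Dockerfile and everything else the build references (such as files named in COPY instructions).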
C: We can take a look at the Dockerfile we're trying to build, which is pretty simple: it's just a Debian image, and we install make and copy over a file, foo. Looking at the pod spec, it's really similar to what I just showed in the slide: I specify a bucket where my build-context tarball lives and a destination to push to, and then the required authentication is mounted in as well.
C: Kaniko also finds a list of mounted directories and ignores those during snapshotting, because we don't want those directories to end up in our final image. So basically it just goes through and starts executing our commands: here we're installing make and then taking a snapshot of the filesystem; then we copy over the file foo and take a snapshot of that over here; and then Kaniko will push the image at the very end. So we can pull this image.
C: Yeah, and if you run this image, we should be able to see the file foo, which we copied over, and make should be installed — which is basically a really simple case for Kaniko in Kubernetes to build and push an image. And then, for an additional security boundary, you can also run Kaniko in gVisor. Now, I only have it...
D: Hello everybody — from DockerCon. So, we're in the middle of code freeze — a little past the middle of code freeze, actually; we've got less than a week of code freeze left. The plan is to lift code freeze on Tuesday and branch and cut rc1 on Wednesday, and as far as I know, as things are looking now, that will happen on time, barring unexpected developments.
D: [...] change — even if it's not a trackable feature, you need to do that ASAP; contact SIG Docs if you need help. CI signal is good; we've got a couple of tests being flaky. Alpha features are being flaky, so if you're actually in charge of an alpha feature in this release, you may be hearing from us or from Test Infra today if it turns out to be your feature that's causing the flakiness — they're currently trying to track that down. There are only a couple of issues and only a handful of PRs open.
G: [...] the team is becoming more concerned that we need to have some visibility there, so we're looking to add that, and we're in need of somebody who might volunteer for that role. The branch manager role is also currently open, and probably maybe two more. Tomorrow I will post a potential draft — a very drafty schedule — for discussion around 1.12, and that's about it. We're kind of trying to get this ball rolling a little bit earlier and progress smoothly from 1.11 to 1.12, so please ping the team if you have interest in volunteering.
B: Thanks, Tim and Josh — anything else? I am going to assume by the pause, no. It looks like 1.10.4 went out about eight days ago, so just a quick patch release update. Jaice, was there anything that you wanted to mention as 1.10 release lead? No? Well, good. Okay, so it looks like 1.10.4 is just another awesome stability update, which puts us right along into the KEP of the week.
B: If you don't know what the word KEP means, it stands for Kubernetes Enhancement Proposal, and there's plenty of information — actually, a plethora of information — that can be found inside lots of the Kubernetes repos, with most of the lovely information on KEPs in the kubernetes/community repo. So, Andrew Sy Kim is planning on talking to us about the SIG Cloud Provider KEP — did I pronounce that right? I'm so sorry — did I pronounce it wrong? Yeah.
A: No, it's good — say it one more time? "Sy Kim" — it just doesn't matter; you can say it like that. Yeah, thanks. So, I'm Andrew, a software engineer at DigitalOcean and one of the chairs for SIG Cloud Provider. To provide some background on SIG Cloud Provider: it was created last week, so we're very new, and prior to that we were a working group.
A: So with that in mind, the SIG has kind of three main focuses right now. One is creating and formalizing an onboarding process for new providers that want support, so they can do the best they can, and also setting a certain level of technical excellence that we can expect from all those providers. The second is improving documentation around providers.
A: We understand that right now cloud provider documentation is not great. There are a lot of features that we integrate within Kubernetes that work with different cloud providers, but how those features work and how to operate with them is kind of unclear, so we want to improve that. And we also want to improve testing, which is what this KEP is about: we want to get to a point where every cloud provider is active about testing the latest versions of Kubernetes and publishing those results somewhere...
A: ...where SIG Testing and SIG Release can consume those results and act on them quickly. So, about the KEP: the KEP is really short, and it's just really outlining what I just said. It's a proposal that talks about why we want conformance tests reported by all the cloud providers, and it talks about how the cloud providers can actually go about doing that.
A: This is something that the working group previously wanted to do, but for the longest time we couldn't. I think we talked about it maybe a year ago, but we couldn't do it because we didn't have the formal mechanism of the KEP to propose this in a way that gets all the interested parties involved. We also didn't have the amazing work that SIG Testing has done around testing infrastructure.
A: We didn't have access to that; it wasn't available back then. And so now, with SIG Testing having created Testgrid, and having a more formalized process for actually running tests and reporting those tests somewhere, we can have concrete instructions for providers to follow to report those results. OpenStack has also been doing a lot of work leading the charge on actually testing this out and fixing all the bugs along the way.
A: They're actually a pretty good example of how to run these tests and upload to Testgrid, so we can follow their lead on running conformance tests on a regular basis and then reporting those to SIG Testing and SIG Release. So going forward, that's what we're going to be pushing for: we want all providers to do this eventually, and we understand this is a pretty big effort that requires a lot of time.
A: And that's fine, because KEPs are meant to be iterative — as long as we agree on the general direction of the KEP. We understand a lot of important things are missing, but we're going to address those in the future, as we gain more experience around how to run conformance tests and how to upload the results. So I think that's it for me — do you have any questions around the KEP?
G: One of the motivations is getting results, because that will lead to improving coverage, but then you explicitly have improving coverage as out of scope. Do you see maybe a follow-on KEP, then, once things are flowing, to encourage a coverage increase? Because conformance coverage is pretty low today.
A: Yeah — yes, like you said, out of scope today, but definitely in the future I can see us working with SIG Testing to improve that coverage. But asking providers to increase coverage themselves is not something that we think everyone is on board to do, or willing to put in place as a priority. So that's kind of where we stand right now.
B: If you wanted to find out just a little bit more about what's going on inside the KEP space — as far as what projects are being proposed and which ones are actually in progress — lovely; I didn't even know that KEPs had a Trello-like board; that's kind of awesome. So that was my personal takeaway. Moving on to SIG updates, we're going to start with SIG Windows, with one Mr. Patrick Lang talking about mapping Kubernetes features to the Windows release. So Patrick, take it away for us. Hey.
E: Yeah, good morning. So I'm Patrick Lang — I happen to work over at Microsoft — but anyway, Michael Michael, who's also on the call, and I are the two chairs for SIG Windows, and so I'm going to go ahead and share out some content real quick here, just so you can see what I'm talking about. All right — okay, can you see that? Okay, Zach, I see it. All right, good.
E: So one of the things that's a bit unique with Windows is that, release over release, Windows is now releasing two times per year as part of what we call the Windows Server semi-annual channel. Basically, we've got these date-coded releases like 1709 and 1803, which just went out, and one of the things that we've discovered is that certain features in Kubernetes require...
E: ...the newer version of Windows. So, if you've got anybody who's basically trying to see what the best release of Windows is to use based on the features they need, we've got that in this board here, and this is something that we're going to have maintained, basically release over release, until we get to the point where we're looking to graduate out of beta and go to GA.
E: So that's something that's a little bit different. If anybody's got recommendations for better ways to get this information out, I'd love to hear it — drop me a line in Slack. But I guess the key thing is: if you're working on building clusters with both Linux and Windows in them, the best practice is always to use the latest release there, because we're adding so much stuff. And in Kube 1.11 we had a nice big list of features there.
E: In terms of getting some of the Windows-specific stuff for security contexts done, and finally implementing kubelet stats — that was way overdue, but it's there now, so that should enable things that have been missing, like Horizontal Pod Autoscaling. And then the main areas that we're hard at work on now are mostly focused around finishing out the rest of what's needed in the ecosystem, and a couple of examples there:
E: Today, all the development we're doing is still using dockershim for the CRI implementation, and so we've got teams that are contributing to SIG Windows and other projects like containerd — and CRI-O is also under evaluation. But, as a group, we're figuring out which path we want to go forward with between those, or possibly both.
E: That way we can better align with the deprecation plan for dockershim as that comes along. And then we've also got members working in projects like OVN-Kubernetes, Flannel, Calico, and some of the cloud-provider-specific plugins, like the Azure CNI plugin that we use on Azure — but I know I've had pings from other cloud providers as well. So, basically, getting the rest of those pieces together that are needed to really build a complete production...
E: ...deployment is kind of the main thing we're working on now. The last big topic I wanted to cover is that we've got a team in the SIG that's been working on getting support for the node and conformance tests, and they're making really good progress there. The pass rate is something around 70% right now; a large part of what's actually failing is tests that take dependencies on Linux-specific features, rather than things that are actually part of the Kube functionality.
E: So over the next couple of weeks to a month, I'm going to be taking in a number of PRs for things like Prow, to be able to kick off the Windows tests automatically. We're going to have a PR to kubetest to schedule things on clusters that contain both Linux and Windows nodes. That way we get that additional coverage up there on Testgrid, and we want to have those numbers coming out regularly.
E: So we can use that as one of the benchmarks for being able to graduate from beta to general availability. We're hoping to be able to show that quality and consistency — that probably looks like, most likely, version 1.12 or 1.13, which is going to line up with the Windows Server 2019 release that comes out this fall. The thing that's important about that release is that the two releases I mentioned that were part of the semi-annual channel are short-term releases.
E: They only have an 18-month support cycle from Microsoft; 2019 is going to have, you know, the five years minimum, plus there's a separate extended support cycle there. So we want to make sure that that's the one that Kube is ready for, for GA, towards the end of this year. So I think those are all the main updates I wanted to give.
E: Let's see, there are a couple of questions here, so go ahead. There's a question around Unix-style symlinks, and the answer is: we've got something that's like a symlink, and we've got something that's like a hard link. The one that's like a symlink basically has a reparse point that says, you know, go to this other file. And so basically, in that case, we have to make sure that the path is canonicalized before following that file — not on the host...
E: ...in an insecure way, because the paths are always canonicalized within the scope of the filesystem namespace for that container. Hard links are also possible, but in general I wouldn't really want to recommend them — they're kind of difficult to create — so I'd focus on using the symlinks the way that we do, and we've got an example.
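The canonicalization point can be illustrated with a small sketch: resolve a link target only within the container's own filesystem root, and refuse anything whose real path escapes that root. This is a simplified, hypothetical Python illustration of the general idea, not the Windows reparse-point code:

```python
import os


def resolve_in_root(root, path):
    """Canonicalize `path` relative to `root`, refusing any link chain
    that escapes the container's filesystem namespace."""
    root = os.path.realpath(root)
    # Resolve every symlink in the joined path.
    resolved = os.path.realpath(os.path.join(root, path.lstrip("/")))
    # A safe target must still live under the container root after
    # all links have been followed; otherwise it points at the host.
    if resolved != root and not resolved.startswith(root + os.sep):
        raise ValueError(f"{path!r} escapes the container root")
    return resolved
```

A link that stays inside the root resolves normally; a link that points at, say, the host's `/` is rejected.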
E: If you go look at the kubelet code for Windows — the part that lays out the files for ConfigMaps and Secrets — I don't know whether the code to make the symlinks is actually right there in the kubelet or in the go-winio library; it's in one place or the other, but anyway, there's code out there that will show you how to do that. And then — okay, there was a question from Bob about which Docker version Windows presently uses.
E: It updates pretty frequently — I think the current version is something like 17.06-ee-13; it's just over there on the Docker site, but that's the one that we're using for all the testing on Windows today. Once we've got containerd or CRI-O up, that may change. But also, you know, of course, Docker is looking at Kubernetes support, so there's a chance that the right containerd build may just be packaged alongside the Docker builds within the same installer.
I: Okay — for anybody who doesn't know me, I'm one of the co-chairs, along with Adnan and Matt, of SIG Apps, and I'm going to give you an update today of what we've been up to lately. I believe Matt spoke at the last community meeting about Helm graduating to its own CNCF project — for anybody who's not aware of that, it's really a big thing for the Helm community, and they're super excited about it. That being said, the Helm maintainers are still participating in SIG Apps, and that's going to continue, so cheers there.
I: Since the last time we gave an update, Helm did a stability release — it's basically break-fix and just general stability patches. The Helm 3 proposal's merged, and work is in progress moving forward on that. In the world of applications, we have a SIG-sponsored project for the Application CRD, which really seeks to describe an application instance as it's running in a cluster. We plan, in the next quarter, to implement a controller for that; we're discussing moving the CRD resource to beta, and contributions...
I: ...and work there are very welcome — so, still a work in progress. As far as the App Def working group goes, it seems that we're going to wind that down, and the proposal that came out of the working group for well-known labels and annotations we will merge in a partial form in the near future; we finally got approval on that, and we're just waiting on tech review, as far as I can tell. For ksonnet, Brian's cut a new release there — 0.10.2 — it's now available; you can go and check it out.
I: Some cool new features for charts. The big news is that proposals are now open for moving helm/charts to a decentralized repo — the centralized repository model that we've got right now for incubator and stable just really isn't scaling very well, so we want to break that up and go decentralized.
I: In Skaffold, Kustomize support has just been added, and it'll be in the next release. For our charter, we're still waiting on approval from the steering committee, but we hope to get a charter approved. And then, for some other general work in progress on the workloads API: we want to focus on promotion of the batch API.
I: Basically, the only thing left on the workloads API that's still beta is CronJob, but prior to promoting CronJob to GA we want to make sure Job is stable first — right? Like, CronJob creates many Jobs; if Job is not doing what we would like it to do, that could be problematic for everybody's cluster. So, in order to maintain cluster stability, we want to do a little bit of work there before promoting CronJob. And then we have a rather interesting KEP that's been proposed: first-class support of sidecars.
I: This is something that, if we move forward with it, we're probably going to have to move forward with in conjunction with SIG Node, and probably something that we would want to take to SIG Architecture. But the general feedback that we've been getting from the community thus far seems to be very supportive of the idea. That being said, this KEP really needs some work in terms of fleshing out the actual technical details of the implementation. And that's basically it for the update — any questions, comments, feedback?
J: Helm is a project that's actually a program, right — it's software. Charts are kind of an artifact that Helm uses to deploy something, right? There are maintainers for charts, but — and maybe it falls under Helm governance at some point — it doesn't seem to be moving as an officially CNCF-supported project. So...
J: Yeah — well, okay, okay, I would still stick to that perspective. I think it's a little bit confusing that a whole bunch of charts are still being managed in the Kubernetes community when the code's moved out. Is that the intention, or is that just something that's being worked out?
I: I think Matt Farina or Matt Butcher would be better people to ask about that. I contribute to some of the charts, but I'm not a charts maintainer. Helm is still working out what its governance model is going to be — I believe last week Matt Butcher got an initial proposal out for their model — so I'm not really sure what their plan is. The decentralization of chart repositories is really in support of organizations having their own internal repositories and being able to also talk to public repositories.
K: So I'm in a similar space. I think, you know, the Helm charts were managed under the same governance as Helm, with the same group of people, as a sub-project of SIG Apps. If Helm's moving out but the charts are not, then are Helm charts now a sub-project of SIG Apps? Who's leading that project? What's the governance around that? I think just getting some clarity here would be helpful.
K: So, just to be clear: it's not a graduation. This is not a sort of standard path — we don't expect everybody to follow this path. I think what happened here is that Helm decided to exit Kubernetes and then separately apply to the CNCF, and that's not something that we expect to happen for every sub-project. So this is not an expected path forward, and I think we've got to be careful about, you know...
K: ...how we signal that to folks in the larger community. And also, just because there's an OWNERS file doesn't mean that's the sub-project. I think there's still this issue around who's curating these things, who's approving them, who's deciding when we actually take on new charts and establish those new OWNERS files. I think that's something that, if this stuff is going to get maintained in a Kubernetes repo under...
K: ...SIG Apps, then SIG Apps has to have some governance around that and view it as a separable sub-project. I find it really surprising that the charts didn't go with Helm, you know, as they moved out. Now, another option here is to say charts no longer belong as part of the Kubernetes organization on GitHub, and then say each chart is now on its own to maintain its own processes.
I: As it stands right now, charts is still listed as a sub-project inside of SIG Apps, with a top-level OWNERS file for the charts; each individual chart has its own owners and maintainers, because having a top-level OWNERS file that consisted of several maintainers from various companies and SIG Apps became too burdensome to lead to high-quality charts. That is the current status of the governance surrounding it.
K: Well, so, one of the things — I'm looking at the sigs.yaml right now and there's a pointer to charts tooling, which is code for actually tooling charts. Is that actually part of the charts sub-project, or is that part of Helm? Because it was part of the kubernetes-helm organization, which, I know, apparently just got renamed as part of this.
K: Well, if you look in sigs.yaml, it's listed as part of the charts sub-project, and it points to kubernetes-helm. I'm not saying — like, I think we just need to think this through, get some clarity here, and create some clean lines of separation between Helm charts and Helm. I think this is also going to be interesting as Helm continues to evolve as a project: how does that actually impact the Helm charts sub-project that's pretty Kubernetes-focused?
I: I think, in my mind, the right thing to actually do is to let the Helm maintainers and the charts maintainers work towards a model that works for them, because primarily I don't see this as particularly interesting to Helm chart consumers so much as to the people who are actively maintaining them — they're the ones who are going to be most affected by the governance surrounding the charts and would be most affected by the ownership.
B: Okay, well, it sounded like an awesome discussion — definitely something that needs to get figured out. Thank you guys so much for having it. For the last thing: it looks like SIG Docs is going to be out this week. I saw Jen Rondeau — she was on the call. Jen, if you wanted to give us a docs update in two or three minutes? Otherwise — okay, sure, it's just letting you know now.
H: We can possibly rotate SIG Docs in for something more comprehensive in a little bit. The biggest news for the community as a whole is that we are making really excellent progress on issues against the Hugo migration. There are still some outstanding issues, and there will continue to be broken bits for a while yet, but we have plowed through a ton of stuff, with many thanks to a huge community that stepped up to help.
H: There have been all kinds of first-time contributors helping us do this, so that's been really pretty awesome. We are also working like mad, sort of backing up Misty as release meister for 1.11, and, along with Josh, we appreciate everybody's diligence about their 1.11 PRs and are happy to help if anybody needs help. And that's all I've got — thanks.
B: All right, well, with that, that concludes SIG updates. On to announcements and shout-outs and all the goodness like that, and we'll try to wrap this up here in the next five minutes and get you back to your day. So, it looks like Kubernetes office hours is next week — Wednesday, yep, so Wednesday next week.
B: So definitely do that — I actually might try to do that myself. There's a link inside of the notes, and you can go over there to it; obviously you can also ping the contributors channel on Slack and get all the information that way. SIG leads: if you have not uploaded your meeting videos to the YouTube channel recently, please try to catch up; ping Jorge if you need some help uploading or accessing your recordings.
B: This helps us to keep sort of a universal system of record of what's going on as an open community, and obviously so that people who miss the individual SIG meetings have the opportunity to catch up on what they missed. So, moving on to shout-outs: Josh Berkus was thanking Jordan and — oh, I'm going to butcher this name, so I'm going to skip right over it, I'm so sorry; the username is @dims — for pitching in and doing a ton of work on PRs for the Kubernetes 1.11 release, across the entirety of the codebase.
B: Thank you guys so, so much. Jen Rondeau was calling out Misty and thanking her for stepping up on all the things in docs, no matter how crazy they get or how much she has on her plate — so thank you so much, Misty. Mr. Augustus was called out for giving us a huge head start and herding all the cats to get a stellar 1.12 release team already in place. I think we're all really excited to work with Tim Pepper as release lead for 1.12, so that'll be a lot of fun.
B: Misty and others were thanking Josh for herding all of the 1.11 release cats, and one specifically said: a huge shout-out for being an awesome and patient leader throughout the 1.11 cycle; it was such a learning experience seeing him work through issues calmly, all the while encouraging the release team to lead a little bit in our own way.
B: Terrific. So with that, that wraps up our meeting a little bit ahead of schedule. Thanks so much for hanging out with me in this wonderful installment of the Kubernetes community meeting. You are all awesome people, and I hope you have an incredible, incredible week. We'll see you a week from today.