From YouTube: Kubernetes Community Meeting 20190103
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 6pm UTC.
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more information!
A
All right, welcome back everybody, happy new year. I hope everyone's recharged from KubeCon, or probably sick. It looks like people are starting to trickle back into work, so welcome, everybody. We are going to start today with a demo with Melvin for OpenLab. Then Aaron's going to do some release updates. Then we have three SIG updates: today it's going to be SIG Apps with Matt Farina first, then SIG UI with Jeff Sica, and then SIG VMware with Mr. Steve Wong, and then we'll go into announcements. With that, Mr. Melvin, you have ten minutes.
B
Alright, so thanks everybody for the opportunity to present to the community about OpenLab. As you can see, my name is Melvin Hillsman, and this of course is OpenLab, which we lovingly refer to as a playground where everyone can play. It's curated infrastructure for open source testing, and OpenLab aims essentially to do three things: reduce friction for cloud ecosystem tooling integration, test bed delivery and efficiency, and also cross-community collaboration. So what we mean in terms of reducing friction is, primarily:
B
We want to be part of a federation, which I'll show you in a little bit. You know, the stuff that we use allows people to not have to build out duplicate resources or make duplicate stuff available. Say someone's about to dedicate a server, for example, and you've got networks: you probably both can work together, versus both of you needing to have the same resources. And that's just the smallest case; of course, we're talking about test beds, which means a lot of dedicated servers, a lot of devices, etc. So cross-community collaboration, I mean, it pretty much speaks for itself, right? So what we offer is dedicated servers, virtual machines, network devices, as well as IoT devices, GPUs, FPGAs, containers of course, and these are just some of the partners; all the logos couldn't fit on there, unfortunately. But the configurations are reusable. So, for example, you can design a certain number of resources to be set up a certain way, and someone can come behind you and use that configuration versus having to recreate it again.
B
It's federated and the footprint is growing, so, therefore, if you want to create something like multiple-site test stuff that's, you know, going across the water, from continent to continent, there are a lot of different options that you can do with that. Just some of the highlights:
B
Over the past year (we've been around for a year), we've been working primarily with OpenStack, gophercloud, Kubernetes and Terraform, and these are just some of the highlights. We were able, for example, to help land Cinder v3, which is volume support, in the external cloud provider. OpenStack v2 support had been there for quite some time; v3 was not, there was difficulty in landing it, and so we helped that happen. In the Go SDK, there are additional projects available, so that functionality is also available to folks who use that SDK, which people who are using the external cloud provider depend on. The CI that we have, we've been providing to the external cloud provider since Kubernetes 1.11.
B
So getting started is pretty straightforward. We have docs at docs.openlabtesting.org, the website with kind of a high-level overview. openlabtesting.org itself, without that docs prefix, is essentially just our public-facing default touch point, kind of a marketing site more or less. bit.ly/openlab-start is where you can test off; I'm sorry, get started. So basically we use GitHub issue templates. Again, if you just want to jump in and start working on things, bit.ly/openlab-project-board is where we throw a bunch of stuff. We're in IRC at #askopenlab, and we use the usual group workflows. So, demos: I know we only had ten minutes, so I didn't want to try to jump into anything live; I'm just going to kind of walk through some screenshots. The CI setup is pretty easy: you just install a GitHub app, and you select the particular repository that you want to give the app these permissions on.
B
So you can say a patch in this repo depends on a patch in another repo to land first. So it's a CI, but really the focus of it is gating: don't land this patch until that other patch lands. Like I said, getting access to resources is pretty straightforward. You can just go visit this link here, click on test resource request, and it's just a simple form you fill out. The only cost for using OpenLab resources is essentially that you have to write a blog post.
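Cross-repo gating of this kind is typically declared in the commit message footer of the dependent change; as a sketch (the commit text and change URL here are purely illustrative, not from the demo):

```
Add Cinder v3 volume attach support

This change needs the new gophercloud API call to land first.

Depends-On: https://review.opendev.org/#/c/12345/
```

The gate then refuses to merge this patch until the referenced change has landed.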
B
It has to contribute back to open source in some kind of way, just so the information is provided back to the community. You'd have access to the resources; there's no cost other than your time. So I had this video, hopefully it's going to play pretty quickly within this time frame. It's just me walking through using one of the test beds that we partner with, and in this test bed we have what we call profiles that experiments are based on. So profiles are those shared configs that I talked about.
B
You can see there are a number of them listed here, and you can use the topology view in order to see the layout. When you create an experiment in this particular test bed, it defaults to using OpenStack, because it's one of the fundamental things that this test bed offers, but you have complete and total access, you know, complete root access to the servers. There are also Kubernetes profiles. There are, you know, these massive profiles; you can see, like, all these different nodes, multiple sites connected via an L2 link. If you just want a couple of nodes, or just one node, or you want a basic setup stitching those together, that's available as well. You can create your own experiment profile just by simply going to this topology editor; you can drag and drop nodes over here, create multiple sites. Like I said, it's pretty, you know, low difficulty, not really difficult to use. There are also CLI tools that are available for folks who want to automate some of this.
B
...of a different type from one site to the other. I'm just going to add a couple more nodes here in this video, and essentially just link them up. So I'm going to go ahead and stop it right now, since we're at the 10-minute mark. If anyone has questions or would like to know more, of course, like I said, we can talk more about it later.
A
Can you hear me? Take it away.
D
Hi, I'm Aaron Crickenberger, I am your release lead for Kubernetes 1.14, super excited to take the community on this fun ride. So, yes, how many of you are enjoying being back from vacation or the holidays or a bunch of time off? But this is me right now, so I am scrubbing through, yeah, right now. Thank you, thank you very much.
D
So I don't know that I have a lot to talk about today, but I think I'll just do the silly thing of reading through what's already in the meeting notes. So, you know, I consider this week zero. There aren't enough people back to really kick the release off, but we did get the release team, and we got all the leads for the release team finalized.
D
You people in your back channel, I'm going to have so much to respond to. So we have all the leads; we're looking to get all of the shadows for the release team, and my goal would be to have all the shadows selected and locked in place by the end of next week, January 11th. I have a draft of the 1.14 schedule that I have circulated through former release leads, the current release lead shadows, and SIG Release chairs to make sure that everything there looks good.
D
So I can go verify that those tests start passing, and whether or not there are any upgrade or downgrade considerations for this thing. My goal here isn't so much to go through and vet all of these myself. (A percolator? That is the worst.) My goal here isn't so much to be the individual who vets all of these KEPs.
D
...a checklist that requires a minimum of effort for me to review whether or not the requirements have been met. So my goal here is to talk about this at length in the SIG Architecture meeting next week, after the community meeting, and we'll see how that makes contact with reality. I think that's pretty much it. If anybody has any questions, that would be a great time to ask.
A
I know, before we move on, Aaron, we did take an item to ensure that we're scheduling KEPs for visibility during this meeting as well. I know for a while we were just looking at open KEPs and mentioning them during this meeting and giving people a chance to talk about them. But we did have some discussions about formalizing that a bit, so that KEP owners get visibility on their KEPs and get the...
D
...chance to use this as a forum, I think, to get a heads up for the larger, cross-cutting changes. But by and large, just taking a look at what's in the v1.13 milestone: there are 25 open issues in the enhancements repo right now, and there are 12 open issues in the 1.14 milestone right now, so that's significantly more KEPs than we could cover on a weekly basis, for sure. So we'll see where things go from there. We are early in the release cycle; now's the time to suggest changes.
D
If you have some thoughts or opinions: I know we got about two thirds of the way through the 1.13 release retrospective, so this probably isn't the only major change that I'm going to be interested in seeing happen. I know we had a lot of discussion around deflaking Kubernetes and tests at the contributor summit and throughout KubeCon.
D
Yeah, okay, so yes, this is clearly going to be a cat-herding release. I do think I have sufficient cat t-shirts to make sure that I will herd all of your cats for you, making sure that I drink coffee made from a percolator whenever I happen to be in a bad mood. Maybe I'll even get steamed up, but the cappuccino... Okay.
D
One other quick question; maybe it doesn't have to be answered now, but I know that we have talked in the past about setting up a schedule for releases ahead of time, versus having people perpetually ask when the next point release is going to be cut. Historically, that's been at the purview of the patch release manager, but I believe 1.13 is the first time there's a patch release team involved, and I filed an issue in SIG Release to propose that they put together a schedule of when...
A
Okay, any questions for Aaron before we move on? Okay, moving on to SIG updates. So usually what we do is set a schedule for the rest of the cycle for the SIGs that are going to be giving their status updates during this meeting; we try to get three SIGs a meeting, ten minutes each. The next three, for next week, are going to be Auto Scaling, Networking and PM.
E
We got our charter done and merged, which was a great feat, because we've been working on it for months and things iterated. But, like many of the SIGs did, right near the end before KubeCon we got it merged and finished. One of the things we did was figure out what it would take to take CronJob to GA, and I'm going to talk more about that in a minute.
E
But when you look at the workloads and batch APIs, it's the one thing that sits out there that isn't GA; it's still a beta, and it has been for a very long time, and so we kind of talked through and figured out what it would take to fix that. We started something new called portable service definitions, and I'll talk about that more in a minute, but we got that kicked off and got our first provisional KEP merged for it. We are working on the idea of an application controller for aggregate status.
E
So what are we doing? It may not be a surprise: we're actually going to start executing on a number of these things, either beginning work, or, for CronJob to GA, we're actually looking for somebody to help us lead it, because a bunch of controller stuff needs to be rewritten. And so we are actually looking for volunteers, or people who want to be a maintainer and involved in taking this to GA. So I'm actually going to walk through each of these, because they're kind of the things we've got going right now.
E
The first one is CronJob to GA. It's a v1beta1 API still; it has been for, I think, years now, and the real reason behind this is that it has scaling issues once you go past thousands of CronJobs. There is a point at which things start to break down in its scalability, and so we're looking for a better model for scaling, especially if you get into clusters that are, you know, thousands of nodes with thousands and thousands of CronJobs.
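For context, the beta API in question is the batch/v1beta1 CronJob; a minimal manifest, with an illustrative name and schedule, looks like this:

```yaml
apiVersion: batch/v1beta1   # still beta, as discussed
kind: CronJob
metadata:
  name: example-cleanup     # illustrative name
spec:
  schedule: "*/5 * * * *"   # every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: busybox
            args: ["sh", "-c", "echo cleaning up"]
```

Multiply objects like this by the tens of thousands and the controller's bookkeeping becomes the bottleneck being described.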
E
We want to make sure that we can continue to scale upwards as people may want to, but we do believe the API is stable, and it's just controller rewriting. And so we're looking for people to help us scale it up who are interested in driving some of that work. We do have people who can mentor and guide, but as far as someone to dig into that in the next few quarters, we don't exactly have that, and so this would be a multi-quarter rollout plan.
E
We don't have a timeline. We have an idea of how many releases it'll take: two to three releases to actually get this out once somebody starts. But as far as really starting on it, we haven't figured that out yet, because at first we'll have the new controller rewrite, but it's only going to be opt-in, and then we're going to want the ability to opt out of it before it fully replaces the old one and we can go GA.
E
Because we're looking for somebody to work on that: a bunch of our controller folks have had other things happening over the last several months, continuing into this year, because they're busy, like Janet, who's been off doing these wonderful things with the KubeCon keynotes. They've been off doing other things, and so some more bodies would be helpful here. Aaron, great question: have we brought SIG Scalability in? No, but when we get started we'll make a point of it. Thank you.
E
You've got to be able to write code; I don't think there are any other requirements, other than the normal Kubernetes ones, if you want to go contribute. So if you're interested, my handle is mattfarina, pretty much everywhere. If you want to come find me afterwards, I'm more than happy to answer any questions or try and help get you looped in on it, if somebody wants to come get involved.
E
Alright, the next thing we have going is what we're calling portable service definitions. Okay, so here's the problem: you want to work with a SaaS in Kubernetes, and you want to make it portable. So say you're going to do MySQL and WordPress, right, and for MySQL you want to use a SaaS. Now you want to declaratively deploy this into a cluster running in Google Cloud, and then you want to go do this in Azure. How do you do it?
E
Well, today you can't, and so we're looking to solve this problem by following something probably similar to the way PDs, PVCs and storage controllers work, to make this kind of model work with different backends implementing it. This isn't a replacement for Service Catalog; it's more of a user experience and declarative experience.
E
Somebody is asking: is this different from an operator? We would actually implement this with more than one operator, right. So you'd have one operator that might know how to talk to AWS, another one that knows how to talk to Azure, but they all share the same CRDs and CRs, and they close the loop for secret information with Secrets following the same schemas.
E
So that way, when you declare, and you get stuff back, no matter where you deploy it you're going to have a standardized format, but many different controllers can implement that interface, and that's what we're looking at doing here. And that's actually the reason we want to have one catalog: so many controllers, housed in many different places, can do this, right.
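No schema had been settled at this point; purely as a sketch of the model being described, a portable claim might look something like the following, where every name (group, kind, fields) is hypothetical:

```yaml
# Hypothetical portable service definition. The CR is the shared
# interface; a cloud-specific controller (AWS, Azure, GCP, ...)
# fulfills it and writes connection details to a Secret that
# follows a schema common to all implementations.
apiVersion: services.example.org/v1alpha1   # hypothetical group/version
kind: MySQLInstance                          # hypothetical kind
metadata:
  name: wordpress-db
spec:
  version: "5.7"
  storageGB: 20
  writeConnectionSecretTo: wordpress-db-conn # hypothetical field
```

The same CR would deploy unchanged on any cloud that has a controller implementing this interface.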
E
It's trying to solve this problem by having a clear interface, but there can be lots of implementations: CRDs are our interface and controllers will be the implementation, and so that's one of the things we're working on. We are looking for contributors, but we have a number of people who are interested in working on this. The KEP was heavily discussed and merged near the end of last year, and so this year is when we will start up, have the repo kicked off, and really get moving on this stuff.
E
Then there's the application controller status. Right, so again, the WordPress example: MySQL as a StatefulSet, WordPress as a Deployment. How do you get roll-up status on all of it? We've been working on the Application CRD for a while. That lets you define characteristics and say, well, here's an application and its parts, and how do we deal with that, including meta information like icon location, stuff like that, that can be used in interfaces and visual places. But now, how do you deal with a roll-up status?
E
How do we get the status of all of the different things underneath it and provide a roll-up, so you can say: is my application, including all of its parts, healthy or not? And so we're starting to get into this one as well. And then the next one, and this is one that'll have implications: we're talking about how we deprecate the beta APIs.
E
The workloads APIs have been GA since 1.9, and the deprecation policy says that two cycles after something's been GA the betas can go; I think it's two releases or six months. It's long past that point now, and so we're actually talking about turning off the ability to accept the beta workloads APIs. That means anybody who's doing anything like, for example, a pull request I reviewed this morning where somebody was still referring to the old extensions API for a Deployment, which is many versions ago, would actually be told that that cannot be.
E
We won't accept those anymore, and we'll turn those off at the API. And so we are looking at how we head to a plan, and I think the current proposed plan has it turned off in 1.15 with an optional flag to re-enable it, and then, after that, we can take that optional flag off, because we want to do this gracefully for a while. But we're looking at that.
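In practice, the move off the beta groups is usually a small diff: change the apiVersion and add the selector that apps/v1 makes mandatory. A sketch with illustrative names:

```yaml
# Before: apiVersion: extensions/v1beta1 (deprecated beta group)
# After:  the GA workloads API; apps/v1 requires an explicit selector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress            # illustrative
spec:
  replicas: 2
  selector:                  # mandatory in apps/v1
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress       # must match the selector
    spec:
      containers:
      - name: wordpress
        image: wordpress:4.9
```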
E
So if you're using the old versions of the objects, the implication is: please update to the GA, stable ones. And so that's kind of the different things that we're working on and looking at right now in SIG Apps. How can you contribute? We're actually looking for people to contribute and sling code for any of these areas and get involved.
E
We have people who will mentor and guide and help out with these things, but we are looking for people who are interested in helping out with this stuff. And if you're interested in things like Kubebuilder or some of that other stuff: that's actually how we build the Application CRD and controller, so we're reusing things, which gives opportunities to work with some of our sister projects. And of course, where can you find us?
E
Not every one. Well, let's see: the deprecation, I don't think, has a KEP yet; we've got to discuss whether deprecations need KEPs, and how we do that. The other thing is that CronJob does not yet have a KEP, because we only just talked through recently how we would go about doing it, so we probably only now have enough information to write one up. They will all have KEPs, without question.
E
I want to say that the CronJob controller keeps everything in memory, and so if you go to thousands and thousands and thousands of CronJobs, your memory just scales vertically forever, and that's a problem. And so we're looking at rewriting the controller so that your memory usage does not scale infinitely as your CronJobs do. Okay.
C
From an end-user perspective, during office hours we've actually had two different people say that they've run into scaling issues with CronJobs. So it's not just a theoretical thing: they had something like 40,000 CronJobs, and they were wondering why all of a sudden they stopped after 20 minutes.
C
So I think we last did our update in August or late July. Since then, we have finished and merged our UI charter; hopefully that is something that a lot of updates will have. We've done two releases, the last of which was 1.10.1, and the big thing with that one was that it was patching a CVE that would let people read secrets in kube-system, specifically dashboard-related secrets. It's something that was definitely a hole that needed to be plugged.
C
A bunch of us met at the contributor summit to go over roadmap planning for 2019. We had a lot of different opinions on that, and you'll kind of see where we're starting to go in the next slide. Metrics-server support: for the most part, we can finally start supporting metrics-server, since, you know, Heapster went away a little while ago. It's not finalized yet, but we have functional code; we just have to put it in a sanctioned repo and get a container out that will work alongside the dashboard container.
C
Just after the holidays, we had a big merge from an Angular migration branch to our master branch. So through most of 2018, a lot of us have been working on updating our version of Angular. That doesn't necessarily sound that big, except it was a full rewrite of our entire front end. So that's why a lot of work hasn't necessarily been visible; not that it wasn't public, but you know...
C
It's been grunt work, not very glamorous. So now that that's been merged in, the next release will be on a new version of Angular, and it will also change our versioning schema, because it is such a large change. And, lastly, we had an annual survey of dashboard users, where we send a poll out to everyone and anyone that is interested in telling us how they use the dashboard and what they think of it.
C
All the relevant links to these things are in the slides; it's just kind of a lot to go over in a short amount of time. So if you are interested, please click the links; I believe the slides are linked in the agenda. Next slide, please. So what we're doing in the first quarter-ish of 2019: we're going to take the metrics-server code that we've written to support metrics-server and actually roll it into our next release. This is an entirely short-term solution; long term...
C
We are looking to be able to support Prometheus and other metrics backends, but for now we need to have something to actually feed our nice shiny charts, so this is what it is. We're hoping to get Prometheus and other metrics support done probably in quarter three of 2019, but we have to see where things shake out. We are changing our versioning schema, like I said, so we're going to start with a fresh 2.0 release.
C
Originally, dashboard was tied to the client-go versioning schema. So if, for example, we're on dashboard v1.10, that means we support client-go 1.10, so we're a little bit behind. We're going to leapfrog, and 2.0 is going to be versioned against client-go 1.13. We will document which versions of client-go we're dependent on, but because it's such a large rewrite of the front end, it made more sense to go with a fresh versioning schema. And hopefully, by the end of quarter one 2019...
C
We will have better OAuth support, so a lot of issues that we've been seeing with people dealing with authenticating to the dashboard go away. So how can you contribute? We have a bunch of good first issues; that link will point you to all of those labels. There is currently a checklist of what's left before we do an Angular migration release, and we would love to hear additional features that people are interested in. One of the big ones...
F
Can you guys see that? Looking good? Okay. What we did last cycle, three major things we were working on: moving the cloud provider from in-tree to out-of-tree. An early alpha release of this occurred on December 10th, just before KubeCon Seattle. We also, in conjunction with that, came out with the initial release of the CSI provider for vSphere storage; the slide deck is linked in the notes, and these links are live if you want to see more about that and how to use it.
F
Third, we did a release of the Cluster API provider for vSphere. This is in conjunction with work underway in SIG Cluster Lifecycle for enabling the cluster management API to provision the underlay for Kubernetes clusters. By the way, I'll give a reference to a good presentation on this whole subject, which had some demos, that occurred at KubeCon: it was with Kris Nova and Loc Nguyen, and that video and deck are published from the KubeCon Seattle event.
F
We want to get full parity with what's in the in-tree cloud provider; right now, the support for zones didn't make it into the initial release of the out-of-tree provider. With regard to CSI, the CSI effort itself is moving; they've got work underway to add features like snapshots, and we intend to track that going forward.
F
It's an alpha feature, but we recognize that there is an opportunity for enhanced end-to-end testing to take place using that API. How does what's going on in the VMware SIG affect you? I did meet with a person at KubeCon, and the subject came up that they would love to have licensing to enable testing minikube with VMware's Fusion and Workstation. I call them laptop hypervisors, as opposed to data center enterprise hypervisors. We started discussions on how we can enable that in the Kubernetes CI/CD lifecycle, and I'll...
F
...just give a word out that if anyone runs into a situation where there is this kind of licensing issue, where the open-source world might intersect some commercial components, get in touch with people in the VMware SIG, and we can make things happen should that ever be an issue. We can also probably provide resources for any efforts related to end-to-end testing.
F
We've even got some things available, like simulators, that might be able to run in any test suites running in public clouds. So on the sub-project minikube, we're working on support for Fusion and Workstation, and we want to get this into the minikube CI/CD tests. With regard to KEPs: the VMware SIG doesn't have ownership of any direct KEPs at this time, but we are actively working on the ones linked here that came out of SIG Cluster Lifecycle and SIG Cloud Provider, so the links are in the deck. Related working group status:
F
We've got two sub-working groups, one on the cloud provider, with recurring meetings, and a second one on the Cluster API. The next meetings are noted there, should you care to join, and videos and meeting notes are posted at the links. How can you contribute? We've got three Help Wanted bugs; these probably aren't beginner level, where it might at least be useful to have a little experience with regard to what's going on.
A
Questions? Okay, thanks Steve, and thanks to all the SIG reps for doing the updates. Real quick, let's go into announcements. First reminder is KubeCon EU is coming up, and the CFP closes on January 18th; that's about two weeks away, so welcome back, everybody. There's a link there to the location, when the CFP closes, and when they announce the accepted talks. If you attended the contributor summit, please fill out the survey; you should have gotten an email.
A
So please search for "contributor summit" in your email if you attended. We do take those survey results very seriously, so please, please fill that out. Even if it's "you all did a great job", that's good to know, or "terrible job", that's also good to know. If you want to do a demo during the first ten minutes of this meeting, like Melvin did, we're always looking for signups.
A
If you look at the top of the notes document, there's a little link there for instructions; you basically just sign up, and then I'll go and assign you a date, and then you'll do that. We like to schedule at least a few of them in advance, so now, with a fresh new year, there are plenty of slots open for demos. So if you're working on something cool, please check out that link. And the last thing I wanted to mention before we do shoutouts is the folks at CloudYuga; they've been doing this...
A
...for the past few KubeCons: they cross-reference the videos of everyone's talks with their slides and link it all up together into a really, really useful list, and I found this to be very, very useful for catching up after KubeCon. So please check out that link. It's been crucial for me to figure out what talks I need to catch up on, and slides and all that stuff, and I'm pretty sure they accept pull requests as well.
A
Shoutouts. Just real quick, since it's a new year, to explain how this works: we have #shoutouts in Slack. If you see someone going above and beyond the call of duty, feel free to just toss in a shoutout for them in that channel, and then we'll read out their name during this community meeting so they can get a nice thanks from the community. So this kind of covers all the shoutouts during the break.
A
Jeff Grafton would like to shout out to Christoph Blecker for breaking the godep verification checks into their own job, bringing the rest of the verify checks job down to 41 minutes. Jeff Grafton would also like to shout out to fisherxu for finally fixing our generated code to not include the year, thus preventing the build from breaking and needing happy-new-year PRs. There's a list of PRs there where people had to update the year, and I guess all that work goes away, which is great and ideal.
A
Zach would like to shout out to the Chinese reviewer team of SIG Docs: Rui Chen, Adam Deng, Xiaolong He and, sorry, Peter Zhou. There's a blog post about the second new contributor workshop at KubeCon; it's linked in the show notes, and the blog post is also available in Chinese; I've linked that there, and there's a link to the pull request. And that concludes the shoutouts.