From YouTube: Kubernetes Community Meeting 20180308
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
Notes: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
A
And happy International Women's Day, everyone, and welcome to your weekly Kubernetes community meeting. I'll be your host today, Jorge Castro. We've got an action-packed meeting for you today: we're going to start with a Kubeflow demo, some release updates, the Graph of the Week with Zach, and then two SIG updates, one from SIG Apps and one from SIG OpenStack. So, if you would, please give Jeremy your full attention, and let's start with the demo.
B
Great, well, thank you! Thank you, everybody, for this opportunity to show you and talk to you a little bit about Kubeflow. Just by way of quick background on Kubeflow: what we're trying to do is this. We think, from what we've learned at Google, that ML is really a distributed systems problem, in the sense that if you're trying to run ML at scale and in production, you have a lot of different components that you need to run.
B
So, as an example, you might need to run Jupyter to give your data scientists a tool and an environment that they want to use. You might need to run distributed TensorFlow jobs to train models. You might need to use TensorFlow Serving to actually serve your models. You might want to use a workflow engine like Argo or Airflow to write complex workflows. You might want to use a framework like Beam or Spark to do data processing and batch inference.
B
We're
just
trying
to
make
it
super
easy
to
deploy
an
ml
stack
consisting
of
the
components
that
we
think
a
lot
of
ml
practitioners
and
data
scientists
want
to
use,
and
so
you
know
what
we
give
you
right
now
is
we
make
it
really
easy
to
spin
up
Jupiter
hug
so
that
you
can
you
get
a
an
IDE
that
sort
of
friendly
for
data
scientists
and
there's
standard
notebook
environment.
We
give
you
10
0
CR
D,
which
makes
easy
to
manage
and
run
distributed
in
tensorflow
jobs,
and
then
we
we
also
provide
some.
B
Syntactic
sugar
for
deploying
tensorflow
models
in
the
form
of
casein
ekam
components
packages.
Then
we
also
have
packaging
to
deploy
off
all
these
components
using
Koopa
kasanete
and
we
included
a
bunch
of
other
components
such
as
Argo
for
doing
workflows,
Seldon
for
deploying
non
tensorflow
models
and
we're
constantly
adding
more,
and
so
the
the
first
thing
that
we
do
is
we
make
it
super
easy
using
case
Annette
to
deploy
this
entire
stack.
And
so
what
we're
showing
you
here
is.
B
You know, there's their on-prem cluster, and we want to make it really easy for them to change only the couple of parameters that they care about when moving between those two environments. So, as an example, when you move from running locally to running in the cloud, you might want to scale out horizontally and add more nodes when you train, and you might also want to use more resources, like GPUs, and so we think that ksonnet is a great answer for that.
B
Okay, so what we've done is: we've deployed Kubeflow, and I've got Kubeflow running; that was what I ran with those commands that I just showed. So now what I've got running on my Kubernetes cluster is JupyterHub, and I've used JupyterHub to spin up a notebook. The nice thing about using Kubernetes for running JupyterHub is that we can take advantage of Kubernetes to do all the resource scheduling.
B
When
you
spawn
a
notebook,
say
I
want
so
many
CPUs
I
want
so
many
GPUs
and
under
the
hood
we
do
all
the
scheduling
with
using
kubernetes,
and
so
we
basically
have
all
the
commands
that
a
user
would
want
to
run
and
then
most
of
the
I'm
skipping
over
most
of
like
the
just
basic
cluster
set
up
or
sort
of
new
notebook
set
up.
And
then
this
is
the
interesting
part
right.
B
So
here
what
we
have
is
we
have
a
case
net
component
that
we've
defined
for
defining
our
or
tensorflow
custom
resources,
and
so
we
can
run
it
and
we
can
submit
a
job.
And
so
the
idea
here
is
that
we
have
a
custom
tensorflow
resource
that
specifies
the
things
that
allows
people
give
us
people
a
higher
level
api
for
running
for
submitting
tensorflow
jobs
and
in
particular
distributed
trends
or
flow
jobs
right.
So,
if
you
run
distributed
tensorflow,
you
have
n
processes
that
you
have
to
manage.
B
That
can
get
quite
complicated
if
you're
trying
to
use
the
built-in
kubernetes
resources
and
you
have
to
manage
multiple
job
controllers
or
multiple
pods
or
multiple
deployments.
So
instead
we
built
a
custom
resource
that
custom
resource
is
going
to
manage
all
the
different
k8
resources
that
you
actually
need,
and
so
all
the
user
has
to
specify
is
the
things
they
care
about,
such
as
the
you
know,
their
code,
their
parameters
etc.
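As a rough illustration of the idea Jeremy describes (the user declares only the replica roles, their own container image, and parameters, and the CRD controller fans that out into the underlying pods and services), here is a sketch of a TFJob-style manifest built as a plain Python dict. The image name and replica counts are made up, and the exact TFJob schema has changed across Kubeflow versions, so treat the field names as illustrative:

```python
# Hypothetical sketch of a TFJob-style manifest: the user declares replica
# roles and their own container image; the CRD controller creates the
# underlying Kubernetes resources. All field values here are illustrative.
def make_tf_job(name, image, workers, parameter_servers):
    def replica(kind, count):
        return {
            "replicas": count,
            "tfReplicaType": kind,
            "template": {"spec": {"containers": [
                {"name": "tensorflow", "image": image},
            ]}},
        }
    return {
        "apiVersion": "kubeflow.org/v1alpha1",
        "kind": "TFJob",
        "metadata": {"name": name},
        "spec": {"replicaSpecs": [
            replica("MASTER", 1),
            replica("WORKER", workers),
            replica("PS", parameter_servers),
        ]},
    }

job = make_tf_job("mnist-train", "example.com/mnist:latest",
                  workers=4, parameter_servers=2)
```

The point is the shape, not the exact fields: one declarative object per training job, instead of hand-managing one controller per TensorFlow process.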
B
But
we
explicitly
give
you
the
full
power
of
like
the
kubernetes
api,
so
you
can
take
advantage
of
like
pod
api's
to
do
things
like
attach
volumes,
credentials,
etc,
and
so
after
you
do
that,
so
you
can
submit
the
job.
That's
what
I've
shown
here.
So
you
can
basically
see
the
job
and
it's
submitted
by
case
net
and
running
on
your
cluster.
B
Sorry, the Zoom widget is in the way of mine. Yeah, so one of the problems that we've solved is that we've included Ambassador as a reverse proxy, and that allows us to dynamically add routes to our ingress. So in this case we're serving TensorBoard at, you know, /tensorboard/name-of-my-model, and those routes get added dynamically, because in ML a lot of the services that we're going to bring up are going to be ephemeral.
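For reference, Ambassador's dynamic routing works by reading `Mapping` definitions out of service annotations, so an ephemeral TensorBoard service can carry its own route with it. A hedged sketch of what such an annotation might look like (the Ambassador v0 annotation style from this era; the service and model names are invented):

```python
# Hedged sketch: Ambassador, acting as a reverse proxy, picks up Mapping
# definitions from service annotations, so an ephemeral TensorBoard service
# can register its own /tensorboard/<model> route as soon as it is created.
def tensorboard_annotation(model, service, port=80):
    mapping = [
        "---",
        "apiVersion: ambassador/v0",
        "kind: Mapping",
        f"name: tensorboard_{model}_mapping",
        f"prefix: /tensorboard/{model}/",
        f"service: {service}:{port}",
    ]
    return {"getambassador.io/config": "\n".join(mapping)}

ann = tensorboard_annotation("my-model", "tensorboard-my-model")
```

Because the annotation lives on the service object itself, deleting the short-lived service also removes its route, which fits the ephemeral-services point above.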
A
Okay, awesome, great, thanks. Moving on: just a quick reminder to everybody that this meeting is being streamed live on YouTube and recorded for the public record. Also, anybody who wants to help me out by taking notes during these presentations would be really appreciated. Some release updates: Jaice DuMars can't make it today, so I've asked them to put all the release information here in the notes. Most importantly, it's basically this: you have two weeks left, and code freeze will end on March 14th at 6 p.m. Pacific time, and all the information that you need is in the notes.
D
We don't say "service level agreement", because agreement is a scary word; we say "service level objective" in SIG Docs for how long we should take: what is our expected response time between when someone first opens an issue or a PR and when that PR receives a response, the first time that it is touched by a person in SIG Docs? What we're looking at right now is the average week view over the past year, and you can look at July 1st.
D
That was shortly after I started, and you can see that there's a really big spike, coming close to two weeks, right at the week of 7/9, and I started wondering: why is that? What's the reason for that spike? And then, of course, you realize that that's the week view that includes the 4th of July holiday and the long weekend associated with it.
D
So
that
was
the
explanation
there
and
then
it
tapers
off
nicely
and
then,
if
you
look
again
shortly
after
release,
1.9
there's
another
large
spike
in
the
week,
and
that
is
representative
and
inclusive
of
the
United
States
all
day.
So
our
response
times
in
general
are
pretty
good,
they're
less
than
on
average
four
days
between
when
a
PR
is
open
and
when
it
is
first
touched.
If
you
look
over
the
course
of
the
past
year,
so
Paris
I'm
going
to
ask
that
you
switch
from
the
week
period
to
the
day
period.
D
Thank
you
because
this
there
is
some
interesting.
It's
just
slightly
more
granular
here
and
if
you
look
short
ly
before
the
release
of
1.9,
there's
that
giant
spike
that
heads
towards
almost
five
and
a
half
weeks
and
I
was
very
concerned
when
I
initially
saw
this
because
I
that
did
not
correlate
meaningfully
with
the
time
shown
in
the
week
average.
So
I
looked
at
those
specific
dates
and
if
you
look
at
those
dates
for
those
spikes
that
is
December
2nd
through
December,
5th
and
I
was
thinking
what
was
going
on.
D
But
if
you
look
in
Paris,
if
you
don't
mind
switching
back
to
the
week,
you
please,
if
you
look
at
the
average
weekly
view
since
the
first
of
the
year,
you'll
notice
that
that
average
time
has
shrunk
considerably
in
some
cases
in
most
cases
less
than
one
day,
and
what
that
is
tied
to
specifically
is
the
implementation
of
prowl
in
kubernetes
website
repository.
So
our
much
lower
average
times
to
PR
touches
is
really
thanks
to
to
the
proud
team
and
to
test
infra
and
Aaron
Creek
and
Berger
in
particular,
for
for
helping
crowd.
D
Go
live
in
kubernetes
website
so
right
now
our
service
level
objective
for
PR
touches
is
one
week.
But,
as
you
can
see,
that's
that's
a
far.
That's
excessive
in
comparison
to
what
our
actual
performance
is.
So
after
1.10
I'd
like
to
revisit
that
time
with
cig
Docs
and
give
genuine
consideration
to
shrinking
that
to
two
days,
because
right
now
the
evidence
that
we
have
supports
that
and
it's
the
be
the
only
compelling
argument
I
can
see
against
it-
is
that
there
are
weekends
inclusive
of
that.
D
So
this
this
is
I,
guess
part
of
the
discussion
includes,
which,
which
period
on
this
graph
we
look
at
a
weekly
average
I
think
is
very
helpful,
but
a
day
I
think
would
show
sort
of
spiky
spiky
times
related
to
weekends,
where
PR
touches
the
time
to
PR
touch
rose
over
three
business
days
or
three
days,
inclusive
of
weekends.
So
this
is
a
really
valuable
tool
because
it
gives
us
actual
actual
data
about
which
to
base
our
dibbs
and
on
which
to
revise
them.
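The metric Zach walks through (time from PR open to first touch by a SIG Docs person, averaged per week) is simple to state precisely. Here is a toy sketch of the computation, with made-up data; it is not the actual dashboard tooling:

```python
from collections import defaultdict
from datetime import datetime

def avg_time_to_first_touch(prs):
    """Average days from PR open to first response, bucketed by ISO week.

    Each PR is an (opened_at, first_touched_at) pair of datetimes.
    """
    buckets = defaultdict(list)
    for opened, touched in prs:
        year, week = opened.isocalendar()[:2]
        buckets[(year, week)].append((touched - opened).total_seconds() / 86400)
    return {key: sum(days) / len(days) for key, days in buckets.items()}

prs = [
    (datetime(2018, 1, 8), datetime(2018, 1, 8, 12)),   # touched in half a day
    (datetime(2018, 1, 9), datetime(2018, 1, 10, 12)),  # touched in 1.5 days
]
print(avg_time_to_first_touch(prs))  # {(2018, 2): 1.0}
```

Bucketing by week, as discussed above, smooths out the weekend spikes that a per-day view would show.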
F
Basically, our main focus is to discuss how people are defining and running applications on top of Kubernetes, whether that be business applications or extensions such as Kubernetes controllers. One of the parts of that is the Kubernetes core code that we own, which is the workloads API: things like Deployments, StatefulSets, DaemonSets, all that good stuff, and the batch API, like the Jobs and CronJobs. So we're continuously working on improving those and driving more features into them, and then there's the other side of the spectrum.
F
You know, most users of Kubernetes are running applications on Kubernetes, so it's kind of a natural place for them to come in and start discussing things, and then that kind of starts off their journey into contributing more to the Kubernetes community. So we like to help users get more involved where they can.
F
So
we
lost
lost
kind
of
at
the
end
of
the
last
year
we
had
a
1.9
retrospective.
So
thanks
to
Jayce
for
coming
and
leaning
us
frutos,
so
one
of
the
so
to
kind
of
good
things
that
came
out
of
1.9
was
like.
We
really
doubled
down
on
some
of
the
flaky
tests.
So
so
now
we
have
a
lot
less
flaky
tests,
which
is
always
a
good
thing.
F
So
if
there's
a,
if
there's
some
sort
of
tuning
that
we
can
provide
help
people
test
their
their
application
tools
on
on
newer
versions
of
communities
before
they're
released,
that's
something
that
we'd
want
to
work.
What
work
towards,
because
what
happens
right
now
is
tends
to
be
a
lag
of
about
a
week
or
two
before
tools
catch
up
to
the
latest
versions
with
kubernetes.
F
You can read all the notes from those sessions; there's a repository in the kubernetes-helm org with the Helm Summit notes, where you can see all the discussions that went on there, and all sessions have been recorded and are on this YouTube link, which is also nice. Slides, by the way, are in the community meeting notes; you can find those links there. And then Taylor Thomas, one of the maintainers of Helm, wrote a summary of how the Helm Summit went, so you can give that a read as well.
F
So
so
then,
on
how
invoke
helm,
v3
stuff,
so
planning
kind
of
started
at
the
helm
summit
and
that's
kind
of
been
going
on.
So
one
of
the
things
that
we've
been
putting
together
is
user
profiles
or
personas
to
try
and
help
drive
some
of
the
features
that
are
going
to
be
developed
as
part
of
Humphrey,
and
some
of
those
proposals
are
being
evaluated
and
stuff
on
a
chart
side
of
things,
and
so
we
have
some
stats
now.
So
thanks.
F
And then one of the things I'm working on is a best-practices document, so that we can point people to various things: you know, this is how you go and implement RBAC in a chart, or this is how you go and implement persistent volume claims, and stuff like that, so that we can have a standard place for best practices for writing charts.
F
In
order
best
practices
for
writing
charts,
so
a
couple
months
ago
we
had
a
working
group
spin
off
from
from
Sega
apps
called
the
application
definition
working
group,
and
the
idea
was
to
essentially
come
up
with
a
declarative
way
to
manage
applications
on
top
of
Cuba
Nettie's
and
one
of
the
things
that
has
come
from
this.
Is
this:
this
application
CID,
which
started
as
a
as
a
kept,
and
the
idea
was
to
essentially
group
together
some
metadata
about
an
application
and
also
the
group
resources
that
an
application
has
as
part
of
it.
F
Alongside
with
this,
which
could
do
things
like
have
application
level,
health
checks,
adding
our
references
to
the
resources
so
that
they
can
be
garbage
collected
and
all
of
that
development
is
going
to
happen
out
of
this
SIG
hosted
repository.
So
it's
apps
application.
The
link
is
I,
think
is
here
so,
and
one
of
the
things
that
we're
doing
is
trying
out
this
new
cig
hosted
repo
process.
F
The CRD and proposals around that, and, again, development is going to happen out of that SIG-hosted repo. And some of the stuff that came from that retrospective: things like, you know, improving the usage of client-go and providing more feedback there, so hopefully working more with API Machinery on some of that stuff, and coming up with some sort of documentation or a tool to make it easier for people to test releases of Kubernetes that haven't been released yet with the applications they're building.
F
The last thing I wanted to mention was that we'll be at KubeCon EU. There's going to be an intro session, which is going to be a bit like this but longer; we'll go through all the different tools that are under the SIG Apps belt in more detail, and also have demos of all the different things.
H
Just to clarify: we are not migrating DaemonSet to the default scheduler in 1.10. There is an alpha-gated feature that will allow work to progress for priority and preemption, which is what this whole thing is about, but it's definitely still going to be the same behavior in 1.10 that you get in 1.9. Thanks.
I
So we'll start off with an introduction to the current SIG leadership and the recent leadership update that we had at the last KubeCon. SIG OpenStack is led by myself, David Lyle from Intel, and Robert Morris from Ticketmaster, and I wanted to give some special thanks to the outgoing SIG leads: Ihor Dvoretskyi from the CNCF, whose responsibilities expanded with his new position, and so he stepped down, and Steven Gordon from Red Hat.
I
Essentially, this is an official recognition within the OpenStack community of the Kubernetes SIG OpenStack leadership and organization. We have the same leaders, we have the same meetings, and we hold the same objectives, but the idea is that, because this is a cross-community effort, it allows us to take advantage of OpenStack resources in an official capacity. So this means that at our OpenStack summits we can have forum sessions, and we get development rooms at the Project Teams Gathering, which I'll talk about a little bit.
I
We just had one in Dublin, and we had an entire day devoted to Kubernetes and OpenStack collaborations, as well as the opportunity to have repository hosting, with Gerrit code review, in OpenStack infra for testing. You know, we see this as kind of a unique governance structure that is primarily here to encourage cross-community collaboration.
I
So we've attended a number of events where SIG OpenStack has had representation. This has included the OpenStack PTG in Denver, which is where we had our first SIG formation meeting, and the OpenStack Summit in Australia, where we had our second formation meeting; the summit itself had 25 presentations, workshops, and forum sessions devoted to OpenStack and Kubernetes integrations. We also met at KubeCon + CloudNativeCon North America. One notable thing that came out of this effort was joint community leadership meetings.
I
This is kind of a hallway track, led by Thierry Carrez, who's the vice president of engineering at the OpenStack Foundation, as well as our first joint SIG OpenStack community update and deep dives. We were also at the inaugural Helm Summit in Portland; it was a really fantastic event. We had participation by the Kolla and OpenStack-Helm teams and LOCI core contributors (LOCI is container packaging for OpenStack projects), as well as the SIG leadership, and a full day of meetings and workspace at the OpenStack PTG in Dublin.
I
We covered various integration points across OpenStack projects, including Octavia, which is a load balancer service; Fuxi, which is a storage connection; Kuryr, which provides a network overlay; Manila, which is also file storage; and Keystone for authentication and authorization; as well as a number of different SDKs, container interface points, and Kubernetes development tools. If you want to see some of the outcomes of that, this etherpad is publicly available, and we'll also be having a write-up of the event later on.
I
Upcoming events with SIG OpenStack representation are going to be the Open Networking Summit, where we're going to have a number of community leadership meetings; if this is something that you're interested in joining us for, please reach out to me on Slack. We'll also be at KubeCon + CloudNativeCon EU, where we'll also be providing SIG updates and the deep-dive working sessions, as well as a Kata Containers talk and an OpenStack provider introduction talk.
I
Of course, the OpenStack Summit in Vancouver will have a full schedule selection, including a number of container-based talks. We have a container infrastructure track, and we're also running a side conference off of that, which is not OpenStack-branded but is a collaborative community event focused on CI/CD, and we're going to have a number of different leaders from the CI/CD community. If this is something you're interested in attending...
I
...please also reach out to me, and I would be happy to connect you with the right people for that. Continuing with our major SIG efforts: one of the KEPs that's in effect is the externalization of all of the Kubernetes cloud providers, and so this is a major effort that has taken place in the 1.10 release. We've worked the OpenStack cloud provider into an external project managed in the OpenStack community.
I
Another option is hosting it within an OpenStack Gerrit. This is a topic of active discussion, and we're looking for a resolution to that fairly soon. But if you have any strong feelings about that, or input on the final disposition of that, please stop by the #sig-openstack channel and chat with us.
I
On the work that we've done: the external provider tree has several integration points into OpenStack. The in-tree provider has been mirrored there and developed; you know, we're planning on a deprecation in 1.10 and removal in a following release. The external provider, you know, gives external and internal load balancer support, with both Octavia and Neutron LBaaS.
I
It integrates LVM iSCSI with Cinder and Ceph RBD with Cinder, and so these are all things that are provided out of the box with the provider. The repository also contains a number of different Cinder integrations: we have a standalone provider with LVM iSCSI and Ceph RBD scenarios, and we also have a FlexVolume driver and a CSI volume driver. So if you're interested in using Cinder as a volume provider for your volume storage, we actually have, you know, kind of these.
I
We've also made improvements on routing and IPv6, and so for nodes that have more than one internal IP address, we have a new algorithm that chooses the correct route, which works for both IPv4 and IPv6. And there was some error in IPv6 support, which we fixed by supporting the matching network prefix and forcing the next hop to be in the same network space. We've also made some major updates to the Keystone authentication code.
I
This is the code that allows Keystone to be an external authentication and authorization provider for Kubernetes. We removed the experimental code from upstream, and we've replaced it with an external provider for Keystone that uses webhooks. If you'd like to see an example of how this works, Saverio Proto on Slack has a pretty good blog post about how he set up Keystone authentication for Kubernetes clusters.
I
We've also worked on developing and testing within the CNCF cross-cloud CI. We merged the initial deployment of that code into the cross-cloud repository; we've run into some load balancer bugs, and so we're still tweaking that and updating the provider on that. I pushed some major bug fixes over the last week, and we're looking at full test integration in late February or early March.
I
Eventually we're going to have this work incorporate the migration to the external cloud provider. Future efforts are going to involve a lot more testing, testing, testing. This is going to include gate jobs for the external provider in OpenStack infra, where we're taking advantage of new features in Zuul v3, which is a CI/CD driver developed by the OpenStack community.
I
There's also Kubernetes on top of OpenStack: Magnum is under heavy support. We support all of the latest versions, including 1.10 alpha, RBAC support by default, and Flannel and Calico support, with CERN being one of the major users right now: they're managing a hundred and fifteen independent Kubernetes clusters, ranging from 1.7 to 1.10, as well as a number of public clouds, including Vexxhost, City Network, and EasyStack, all providing Kubernetes clusters via Magnum, as well as Catalyst and American Airlines working towards production.
I
We have an independent container project called Zun, which provides a smaller API surface that provides Kubernetes pods through an API abstraction, and container networking overlay from the Kuryr project. It started as a Docker networking interface, and now Kuryr-Kubernetes has a full network overlay, if you want to integrate your Kubernetes networking with your OpenStack virtual machine networking.
I
So, if you're interested in collaborating with any of these efforts, we have a Slack channel, #sig-openstack, as well as a kubernetes-sig-openstack mailing list. We also have our bi-weekly SIG meetings; with the daylight savings change that's coming up this weekend, these are going to be happening Wednesdays at 1600 PDT, or 2300 UTC. It was formerly 0000 UTC on Thursdays, but we've adjusted this for US daylight savings time. And that is it for my update. Does anybody have any questions?
A
Clarification to that, okay. Questions? Last chance. Okay, thank you very much, Chris. Moving on: Sebastian Florek from SIG UI sends his regrets; he's unfortunately unable to attend, so I've pasted the notes of their last meetings into the notes document. The big change here is that both leads are now tasked to do other things, so they don't have as much time to work on the Dashboard as they would like, and they need some help.
E
Okay, so I want to give a quick update from the steering committee on some of the work being done around SIG governance and some projects. I'm pasting into the notes links to some relevant docs that you can go look up afterwards, but the update is that we have released the first iteration of SIG governance charters and requirements.
E
There are decisions for each SIG, here and there, of the "how do I handle this one case?" sort of thing, and so we're going to release a very long, detailed template as well, that tries to cover as many cases as you can think of, and all the cases SIGs have explained to us, with specific, step-by-step processes for how you address those things. The intention is that SIGs can start out with a small starter charter and then, over the long term, pull the pieces over.
E
The other change that you may notice in the charter is the definition of two roles where there is currently the SIG lead role. The SIG lead role did not mean the same thing across SIGs: in some SIGs it means "this is the technical escalation point", and in other SIGs it did not mean "this is the technical escalation point" but was more a role for managing operations of the SIG, and so not having this role consistent between SIGs made it hard to go from one SIG to another and know exactly how things are run.
E
It also meant that, by having this as one single role, in certain SIGs there are individuals who maybe should be the actual technical escalation point, but it's not clear that they are, because the SIG lead is named as the technical escalation point; or, vice versa, folks who want to help with leadership outside technical leadership didn't have that opportunity. So we split that out into two separate roles. You can still have the same person doing both; it's just now really clear that they have both responsibilities, rather than being ambiguous about what responsibilities they have.
E
There are explanations behind that in the doc, which is the last link there; we're putting together a "why", essentially. So, in addition to documenting all the "how do I do this?" questions that we get from SIGs and trying to put that into a process document, we're also going to be working on documenting the "why did you choose to do it this way?", so that everyone...
C
Not sure, maybe I've missed it, but is there direction about how, like... We really don't want that stuff to be in, like, meeting notes, you know; this is something that we would expect to be changed pretty often, like doing pull requests under the SIG, and a doc is probably not the right way to do it. Did you have any guidance or suggestion on that?
E
So it kind of depends on what purpose you want to track members for. If it's for communication purposes, we're not proposing any changes to existing mechanisms in terms of the mailing lists and GitHub teams. If you're asking about decision processes, like votes or things like that, what we are proposing is that those be recorded in OWNERS files and checked in. So what we're recommending is that the SIGs identify roles that need to be filled, sort of like the release team.
E
We
have
existing
roles
already
for
at
the
sub
project
level.
We
have
reviewers,
approvers
and
now
owners
which
are
the
sub
project
leads.
If
the
sig
identifies
additional
roles
that
need
to
be
filled,
those
should
be
documented
in
owners
files.
So
you
can
add
comments
and
review
those
changes
and
have
an
audit
trail
and
all
those
sorts
of
things.
So
there's
the
set
of
people
with
specific
roles
and
the
sub
projects
and
SIG's
would
be
a
notice
filed
the
people
who
just
want
to
get
communications
and
are
free
to
participate,
meetings
and
things
like
that.
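For context, an OWNERS file is a small checked-in YAML document listing role holders, which is what makes the review and audit trail possible. A minimal sketch of that idea in Python (the handles, and any role name beyond reviewers and approvers, are hypothetical):

```python
# Sketch of the OWNERS-file idea: roles mapped to handles, kept in version
# control so that role changes are made by pull request and leave an audit
# trail. The handles and the extra "leads" role are hypothetical.
owners = {
    "reviewers": ["alice", "bob"],
    "approvers": ["alice"],
    "leads": ["carol"],
}

def render_owners(data):
    """Render the mapping as simple YAML-style lines, like an OWNERS file."""
    lines = []
    for role, handles in data.items():
        lines.append(f"{role}:")
        lines.extend(f"- {h}" for h in handles)
    return "\n".join(lines)

print(render_owners(owners))
```

Because the file lives in the repository, adding or removing a role holder goes through the normal pull-request review flow described above.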
G
One fine point on that, and I don't think we have this totally locked down, but my guess of where I think this might go is that this set of members, for decision-making purposes, across all the SIGs, will essentially become the folks that have standing for doing things like electing the steering committee and things like that, and so that gets us into a much more distributed and formal process around that.
A
Okay, moving on. Just three quick announcements. Just another reminder that the contributor summit is happening at KubeCon EU the day before; that's May 1st, and you can click through to the link there. We have both tracks, and there's a SIG Docs track as well, if you're interested in that. The CNCF would like some feedback on the draft blog post for the 1.10 beta release; follow the link, and you can contact Natasha Woods. I left her email there as part of the notes. And the last thing for the week is the shout-outs.
A
We
have
hash
shout
outs
on
slack,
so
someone's
doing
a
great
job
and
you
want
to
see
them
recognize
a
little
Buffum
yah.
We
mentioned
everybody
every
week
so
this
week
the
release
team
would
like
to
mention.
Maroon
newbie,
coal,
wanna
and
Benjamin
alder
who've
been
all
super
helpful
in
getting
this
release
moving
forward
and
then
all
the
contributors
who
hang
hung
out
on
the
hangouts
for
the
meet
our
contributors.
A
Ask
us
anything
sessions
that
we
have
where
we
have
existing
contributors,
sit
in
a
livestream
similar
to
this
and
have
new
contributors
asking
questions
so
big
shoutouts
to
erin
creek
and
berger
DIMMs,
Ilia
Demetri,
chenko,
jennifer
Rondo,
chris
Novus
oli,
Ross,
Jeff,
Gregg,
Grafton
and
Jorge
Castro,
and
with
that
that
concludes
a
meeting
any
any.
Last
any
last
questions,
Diane
Miller
asks
any
link
to
request
an
invite
to
the
cube
contributors
summit.
A
We
have
not
yet
gotten
to
the
point
due
to
the
the
size
of
the
venue
where
we
are
feel
that
we're
going
to
do
invites,
but
if
we
do
have
a
capacity
limit,
we'll
approach
that
problem
when
we
get
there
so
currently
right
now,
if
you're
an
existing
contributor,
you
shouldn't
have
to
do
any
extra
steps,
any
other
questions
before
we
break
alright
and
with
that
everyone
gets
9
minutes
back.
Thank
you
very
much.
Everyone
for
attending
and
we'll
see
you
all
next
week.