Description
Agenda: https://bit.ly/2U02wXN
B
So the usual updates are already in the agenda document, so I'm not going to go over that. The only thing I wanted to point out is the contributor summit that we are starting brand new this time in Philadelphia. It will be a day-zero activity from 12 p.m. to 3 p.m., so we're just doing a few hours on day zero. It's open to all contributors: new, old, veterans, whatever. If you have contributed at any point in time, even if you don't contribute any more, you're...
B
...what are the things that could go better, that we could have done better, and all those kinds of discussions. So bring all your thoughts, your suggestions, and your feedback. I will also have a whiteboard set up in the room, in case you want to draw any components or have any kind of whiteboarding discussion. So, as I said, the agenda is definitely driven by the community; I got some suggestions from the community for this particular one. Based on how it goes this year,
B
We will reevaluate: should we have it as a longer session, should we have it as an entire day-long session on day zero, or should we make it a different kind of activity? So, based on the feedback that we get this year, we will definitely change it and make it even better in subsequent events, yeah.
C
...when you're doing those rolling deployments of apps. They've also been working a lot with the CLI team as part of the V3 acceleration effort, and they've recently posted on cf-dev with some plans around completing the v3 controller API and transitioning everything to that over some period of time. I expect that affects a lot of us on this call, and I'm sure they would appreciate feedback.
C
Likewise, the Loggregator team has deprecated some of the older components and APIs that they support, and there's already been some discussion on cf-dev about deprecating the Firehose endpoint in particular, because there are a lot of integrations built against that. So they're certainly continuing to seek feedback on that deprecation, and they encourage people to contact those teams directly, either on cf-dev, on Slack, or via email. A few other highlights: the routing and container networking teams have been discussing ways to reorganize, and they've merged into a single networking team.
C
That team is supporting both existing GA components, like the Gorouters and container networking via Silk, but it is also working on integrating components of Istio into Cloud Foundry to support the next generation of the routing tier. So it's effectively the same work they've been doing, but they're trying to get more aligned in terms of their team structure and goals so they can operate more efficiently.
C
The Eirini team is making a lot of good progress. They've been focused a lot on hardening: making sure that Eirini can run in a redundant, highly available configuration to reduce downtime during updates or unexpected failures. It sounds like they're also working on enforcing all the container limits that we expect for CF workloads, and then on some other work to make it easier to opt into native staging in Eirini instead of delegating some of that work to Diego.
A
Okay, so let's hold off on that for a minute and go to extensions, and hopefully he'll join; he usually joins and gives us an update. So for extensions, I think I added a few highlights from the last meeting we had in January. I think the App Autoscaler is one: for instance, they're trying to scale up their testing, I guess scaling to 20,000 apps. They are integrating with Stratos (not "stratosphere", I'll fix that), all right.
A
We have a new CLI plugin for buildpacks, similar to what Eric was saying about cflinuxfs3; that's one of the changes they're making. And there are two new buildpacks coming, one of them for, you know, NGINX, so maybe you want to check them out. Then there's also a lot of work going on to support the buildpacks.io initiative, so that's definitely something to look into. I was actually planning to chat with them today, so hopefully next time we'll have a better update.
A
The
back
use,
there's
various
refactoring
bug,
fixes
I.
Think
one
thing
about
abacus
to
mention
is
that
the
SAE
team
wants
to
I
guess
go
in-house
with
a
version,
so
version
two
that
we
have
planned
probably
won't
happen,
but
this
current
version
will
just
stay
and
then
you
know
go
into
some
kind
of
an
inner
life.
So
if
you're
interested
in
the
backups
and
interested
in
you
know
this
direction,
then
definitely
ping
me
or
Easter
Seals.
Oh
sorry,
and
we
can.
A
...we can update on that in more detail, or join the extensions call; that should be discussed there. And Stratos is probably the highlight of all the extensions: it's going very well, and new features are being added all the time. They had a CVE, so if you didn't see this and you're using it, definitely ping me or ping the team. They have a Stratos channel, so you may want to go there and learn about that. And version 2.3.0 was recently cut; you can go to their website or to the GitHub.
A
You can see the different release notes there. I don't know if they included the details of the CVE, but, you know, now that it's out, you can discuss it with them. For MultiApps, we had an update in January from the SAP team, and I think one cool thing to mention there is the assemble tool that they added, so you can assemble, you know, your multi-target apps from different pieces. So that's cool. And then Blockhead: I think one of the key things this team did...
A
They added support for Hyperledger Fabric, so now you can use Blockhead to deploy onto Hyperledger. I actually saw a demo of this by Swetha, a member of the team, and I thought it was pretty cool. So with that service you can deploy, I guess, on Hyperledger Fabric but also on Ethereum. Lima is always on these calls, and he's actually on this call, so he can take questions.
A
Okay, I'll see what to do; I'll ping them and find out what's going on, because Fred has been very, very good at coming to pretty much, you know, every single CAB call. So maybe something's going on. He's in Montreal, and they, you know, know how to deal with the cold, so I'm sure it's not about that; there must be some other reason. Maybe he's on vacation. Okay, so we'll get to the talks then. Let me share my screen, because I'll be the one presenting first. My presentation should be short.
A
It's just a short presentation on the survey results. So obviously the goal is to do a retro, right? I mean, we can't really do a retro for CAB calls every month, since the call is every month, and, you know, it makes sense to wait for a few; yearly seems about right. We did it last time and it worked out. So the goal is, of course, to collect the feedback and then see what we can do to improve.
A
First, how often have you attended, or do you attend pretty much every call, so I can get an idea of whether, you know, you're an outsider or you're part of the group that usually attends; then what you enjoyed the most; and then constructive feedback. So it keeps it simple. If you have comments on the questions, in terms of whether I should ask different questions next year, then definitely ping me, but I think this kind of keeps it small and short.
A
So we had 24 responses when I stopped it, you know, as of yesterday. Compared to last year it's about the same. I don't know if it's the same people that actually responded, so it's not clear, but it was about 25 last year. And we have about 25 people on the call, so maybe it's just us, which is, I guess, good to some extent.
A
So, you know, I guess a goal here would be to try to convert those eight people that did not attend, all right? I think there was one response in the constructive feedback that explains why people can't attend. I'll spare you and tell you right now: people complain that the 8 a.m. time is very early.
A
This is kind of the word list that came out. My name came up there, so I think maybe my payments to a couple of people were on point. But otherwise I think people are super happy about the demos and, of course, the project updates, which is not surprising, because that was one of the things that we fixed to make this better.
A
So this is the tag cloud for all the words. The way I generated it, and also the word list, is to take all the comments (because for each question we allow people to put, you know, two, three, four comments), convert them to lowercase, and then run them through a tool that removes, you know, basic prepositions and stuff like that, and then generates a tag cloud. You can see clearly that demos are the key, so we'll try to keep doing those. And then the last question, question number two I guess, sorry, question number two's highlights: I took some of the responses that I thought were interesting. Getting demos, and presence of the complete foundation; that was only shown one time, but it's definitely good to see that people care about that, right?
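The pipeline described here (lowercase the comments, strip common filler words, count what's left) can be sketched in a few lines of Python; the stopword list and the comments below are illustrative stand-ins, not the actual survey data:

```python
from collections import Counter
import re

# Illustrative stopword list; a real run would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "for", "it", "is"}

def word_frequencies(comments):
    """Lowercase free-text comments, drop stopwords, count the remaining words."""
    words = []
    for comment in comments:
        for word in re.findall(r"[a-z']+", comment.lower()):
            if word not in STOPWORDS:
                words.append(word)
    return Counter(words)

# Illustrative comments, not the real survey responses.
comments = ["Loved the demos!", "More demos please", "The project updates were great"]
freq = word_frequencies(comments)
print(freq.most_common(3))  # "demos" dominates, as in the talk
```

A tag-cloud tool then just scales each word's font size by its count.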
A
I think one of the big things that came out, and probably a good action item, though it doesn't show very highly because only one person mentioned it, is to break the YouTube videos into smaller sub-videos. That way, if somebody wants to see Konstantin's talk, they don't have to go through me and the highlights. And if you only ever want to see the highlights, then break out the highlights. I thought that was a very good one.
A
Exactly, my payments are working... but no, that's not what it is. I definitely think breaking up the YouTube video would help, because if you watch YouTube and you watch a multi-segment show, it's great to have that. For instance, I watch, you know, Stephen Colbert every day, and I watch it on YouTube because I watch the little segments; I don't have to watch the whole show, right? So I think that's definitely a very good one.
A
Let's see some of the other highlights. More company representation: that's not showing in the tag cloud because it was mentioned once, but I extracted it, and that's probably a fair point, because it seems like we have lots of talks from SAP and Stark & Wayne, you know, the usual suspects are well represented, and then of course the key members of the foundation and so on. But we probably should try to reach out to smaller companies and see if they have talks, right?
A
So, to that I think: keep up the good work. I'm happy people are happy about this; it came up at least two or three times, so that's good. And then the 8 a.m. thing: yes, I just don't know what else to do. Somebody suggested 8:30. The problem with 8:30 is that the call would finish at 9:30, and that means it eats into the time of people like Eric who, you know, are here from Pivotal and have to go to a stand-up, right? So that's a bit challenging. And then "nothing", which is also nice, right?
A
So people don't want any change, okay. And then the last thing is, I guess, I had one more "keep it up", and then I put some of the text, as is, except converted to lowercase, into the presentation. So if you want to see that, click the link in the agenda and you'll get access to the presentation. If you want to see the 2017 one, let me know, or if you go back in time in the CAB agenda, you should see a link to it.
A
That's all I have; let's see if we have any questions. Maybe if you have feedback that you couldn't give at the time, or you didn't take the survey, you can give it now; it's all fair, I mean. We want to make this time the most productive time that we can for the Cloud Foundry foundation, and, you know, we're always happy for your feedback. So, any comments?
E
Right, so we'll stop doing that; that was really loud. Hi, I'm Konstantin, and yeah, I was asked to introduce, or present, the stuff we did around Silk. It's a prototype, but, I mean, we thought it might be a cool thing to have in the future. So this is what we did and why we did it. The problem is: CF uses Silk for CNI.
E
Kubernetes has a lot of choices, but none of them run on Cloud Foundry, so there's no common ground whatsoever. And it would be really nice to have microservices with UDP, maybe, because I've seen customers do that, and maybe even microservices with exotic protocols. But then again, you have to solve how you're going to expose that. One solution could be a NAT router that does the bridging to the Silk overlay network itself and then just routes to the pods directly, but I'm not going to go too deep into that area.
E
So, essentially, Silk is a CNI plugin; it follows the Container Networking Interface specification, and there wasn't really that much work to be done to make it run on Kubernetes. We didn't actually change the real code; we built wrappers, to prototype faster and not have to compile all the time, and at some point it started working. So you could actually just use Eirini, but that would completely leave out Kube clusters that are standalone, maybe deployed on GCP or by other mechanisms.
E
The public one, yeah. So, I didn't really know how much time I have, and the slides are really just scratching the surface; I wanted to save the demo for the end, so people will actually see it working. Well, it kind of just works, but it only kind of just works. I mean, we had to deliver a wrapper around the Silk binary, because Kubernetes looks for an interface that has scope global, while Silk creates an interface that is scope link, which makes sense if you look at the networking architecture. I mean, Silk by itself...
E
...just builds bridges from the container hosts and then routes that traffic, and somewhere in between there's a policy API that creates IP table rules to allow or block traffic. But having that run on a Kube worker just worked out of the box, so we just co-located the policy API, the VXLAN policy agent, and the Silk daemon. And then, again, that already created our bridge, and the only thing to solve from that point on was to figure out how to actually mount one of those Silk interfaces into a Kube container. The nice thing about this would be...
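For context, the policy API mentioned here is driven by simple JSON policy objects; the sketch below only builds a payload in that source/destination shape with made-up GUIDs, rather than talking to a real policy server:

```python
import json

def make_policy(source_app_guid, dest_app_guid,
                protocol="tcp", start_port=8080, end_port=8080):
    """Build a network-policy payload in the source/destination shape
    used by CF's policy API: allow traffic from source to destination
    on the given protocol and port range."""
    return {
        "policies": [{
            "source": {"id": source_app_guid},
            "destination": {
                "id": dest_app_guid,
                "protocol": protocol,
                "ports": {"start": start_port, "end": end_port},
            },
        }]
    }

# Hypothetical GUIDs; a real client would POST this JSON to the policy server.
payload = make_policy("frontend-guid", "backend-guid")
print(json.dumps(payload, indent=2))
```

The prototype's trick is that the "app GUID" slot can hold any unique ID, which is what lets pod-parent UIDs stand in for app IDs later in the demo.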
E
First of all, we can still use Helm; we have a pretty much vanilla Kubernetes running there. The only change we had to make was to add a configurable CNI path, so we have a custom CFCR release running under the hood, but the only thing we did there was pretty much make the CNI path, the path to the config, and the binaries configurable. And the easy wins there are isolation, but also interconnectivity between different Kube clusters as well as Cloud Foundries, I mean.
E
If you take that thing to another level, you could, once you figure it out for two foundations, have two foundations using the same Silk agent: you could just link containers between them, once you externalize the actual routing API from the Cloud Foundry deployment. And the other really nice thing is that it does not have any dependencies on your infrastructure layer. So no load balancer or any service of that kind is required whatsoever, and you also get the CF policy system for free; that also just works.
E
We'll see that in a few minutes. So I added the links to what we did here again. There are two blog posts currently, and I'm writing up a third one that actually explains in depth what we had to do with the IP tables and how to deploy the whole thing. Once that's done, I'm probably going to post it as well, in case people want to rebuild it. Most of the details are in here; I updated it a bit to work with the latest cf-deployment.
E
I had to redeploy everything because that required BOSH DNS, which we didn't have to have back then. Anyway, let's just switch to the actual demo. So on the left I have a CF targeted; I pushed an app just to have something running in there. And here I have a CFCR cluster targeted; I didn't really deploy any specific workloads, I'm just going to use whatever is there in kube-system.
E
It's not crashing or anything. And yeah, I don't know whether I enabled access yesterday evening or not, so I'm just going to disable it for the moment. What this script does is pretty much just cf curl the policy API. The policy API uses an app ID as the policy ID, so what we did for pods is: we just look up the parent of that pod, which could be a Deployment, a ReplicaSet, or whatever, and use the UID of the pod's parent for the policy.
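That parent lookup can be sketched against the shape of a pod object as the Kubernetes API returns it, where `metadata.ownerReferences` carries the UID of the owning ReplicaSet or Deployment; the pod below is a hand-written stand-in, not output from a real cluster:

```python
def policy_id_for_pod(pod):
    """Return the UID to use as the network-policy ID for a pod:
    the UID of its owner (ReplicaSet, Deployment, ...) if it has one,
    otherwise the pod's own UID."""
    owners = pod["metadata"].get("ownerReferences", [])
    if owners:
        return owners[0]["uid"]
    return pod["metadata"]["uid"]

# Hand-written example in the shape the Kubernetes API returns.
pod = {
    "metadata": {
        "name": "web-7d4b9c-abcde",
        "uid": "pod-uid-123",
        "ownerReferences": [
            {"kind": "ReplicaSet", "name": "web-7d4b9c", "uid": "rs-uid-456"}
        ],
    }
}
print(policy_id_for_pod(pod))  # rs-uid-456
```

Using the parent's UID means all pods of one Deployment share a single policy ID, just like all instances of one CF app share an app GUID.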
E
Yes, so here is where you see that we actually just look things up from the Kubernetes API, which should be accessible on the worker anyway, so whatever creates the policies within Silk can do that easily as well. CFCR already has cluster credentials on there, so there's not much to change around it. If I now rerun this again... if there's anything else you want to see, or you have questions, I can go into the actual code.
E
You definitely need them on the workers, and I think you end up having them on the master as well, because this is where kube-proxy runs, and kube-proxy needs to be able to access the Silk IPs, so those service IPs will still work. The thing is, with the service IPs, I mean, they're just destination-NATed to the actual pod IP that gets assigned from the Silk subnet, and therefore the master needs to know about it. But that also works, and I can show what kube-proxy...
E
So it's just that either it's set to the default, which works, or it's left at scope link, in which case Kubernetes would fail to look up the IP address of the actual pod. And then the CNI wrapper itself is not too complicated; I mean, I'm just changing up a few things in whatever gets passed to the actual Silk binaries.
D
It would be super nice if there was a way, and I guess it's a really hard thing to request, but it would be super nice if there was a way to do this a bit more abstractly than CFCR, because it seems like this needs to be kind of baked into CFCR and uses nsenter to patch in. It would be really nice, like Max says, if it could be a DaemonSet, so that it was applicable to any Kubernetes.
C
CFCR is great, but there are a lot of people using various other Kubernetes distributions, yeah.
E
So this is what I came up with. There are still a few things that are a work in progress, like having the master in there, and also doing the IP tables for all the workers; currently it's only working with one worker. But essentially we pretty much allow implementing the Kubernetes networking model from within a worker's /24 range, or all of the workers' assigned /24 ranges.
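For illustration, carving an overlay network into per-worker /24 leases, the way Silk leases a subnet per cell, can be sketched with Python's `ipaddress` module; the 10.255.0.0/16 overlay range below is an assumption used only as an example:

```python
import ipaddress

def worker_subnets(overlay_cidr, count):
    """Carve the first `count` /24 leases out of an overlay network,
    one per worker, the way Silk leases a subnet per cell."""
    overlay = ipaddress.ip_network(overlay_cidr)
    subnets = overlay.subnets(new_prefix=24)
    return [next(subnets) for _ in range(count)]

# Assumed overlay range; each worker gets one /24 for its pod/container IPs.
for i, subnet in enumerate(worker_subnets("10.255.0.0/16", 3)):
    print(f"worker-{i}: {subnet}")
```

Each pod IP then comes out of its worker's lease, which is why routing between leases is enough to implement the flat Kubernetes networking model.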
E
We need to make sure that stuff in the kube-system or kube-public namespaces is available, but then again, this could be done within an extra plugin: the CNI config gives you a list of plugins to call, and we could just have a silk-kubernetes plugin that then takes care of those extra steps. And apart from that, we don't really do much; we change up the namespace and the pod, and, again, that's in the reference.
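A CNI configuration is a JSON conflist whose `plugins` array is invoked in order, so a hypothetical silk-kubernetes plugin could simply be appended after the main Silk entry; the plugin names and paths below are assumptions for illustration, not shipped artifacts:

```python
import json

# Hypothetical conflist: "silk" does the interface plumbing, and an assumed
# "silk-kubernetes" plugin would run afterwards to handle the Kube-specific steps.
conflist = {
    "cniVersion": "0.3.1",
    "name": "silk-overlay",
    "plugins": [
        {"type": "silk", "dataDir": "/var/vcap/data/silk"},
        {"type": "silk-kubernetes"},
    ],
}
print(json.dumps(conflist, indent=2))
```

The runtime calls each `type` in order on pod creation, so the Kube-specific fixes (scope, routes, namespace handling) stay out of the Silk binaries themselves.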
E
End to end it deploys; I mean, I had no manual intervention whatsoever. The CF itself just needs an extra ops file for exposing Silk and scaling everything to one, because we wanted to save resources; there's no point in having more than one on the CF side. But it will probably be a bit tricky to figure out the whole networking with IP tables within a Kube cluster itself. And it's also obviously limited to IP tables for the moment, because that's how it works, so no IPVS, for example, like Calico or Weave.