From YouTube: CNCF TOC Meeting - 2018-08-07
D: If I could interrupt as well — we normally follow the agenda, but could I just call out, while everyone's here, that there's a CFP deadline this Sunday for KubeCon + CloudNativeCon Seattle. We're expecting six to seven thousand people at this one, so please: this is the week to submit, and to get your colleagues in your company or organization to submit.
B: This one does have a vote — and congratulations to Prometheus. That is the second graduated project, after a long journey in incubation and quite a bit of effort by everybody to get this thing into good shape, making sure that the graduation process is very meaningful. Indeed, this was something that we voted on.
B: Okay, and then, more importantly, there are three projects that have requested to present. This is something Brian Grant requested: that we cover them on TOC calls before we decide whether to let them in or not. So we have [project names unclear]. We definitely don't have to make that decision today, but I'm pointing them out here for you to consider. Okay.
B: These are some of the things that we asked for. There was a good response from the GB on this request — it was read very much as a set of requirements for how the GB and the end user group could go away and come up with ways of interacting with the TOC. So what we're presenting here is not a solution, more of a question to the GB and to the end user group: how can you help us to find out more about the projects? Okay. Next slide.
B: We have had some talk, especially in the water-cooler moments, around the future growth of the CNCF, and the importance of retaining a high level of clarity around what we're doing and why we're doing it — what projects are for and how they fit together. I have personally expressed the concern that the landscape is still sometimes being presented in all of its true glory, consisting of everything that might have something to do with cloud native, and here's a link to some commentary on Twitter.
B: About that — Dan pointed out that we have the trail map, which is getting good traction as more of an opinionated guide that actually refers to the projects in the CNCF. I think it's very, very important that we continue to give people opinion around what we're doing and don't get too [unclear]. So we talked about some of the potential threats there, including that, as Kubernetes gets more and more popular, there is the potential of having "X for Kubernetes" for really any value of X.
B: For example, I mentioned the project Cortex, which I'm close to because it came out of Weaveworks. That's a splice between Prometheus and Kubernetes. So, on the next slide, please: here is a proposal. This is not a formal proposal at this point, but it's something that I thought we could discuss as the year goes by.
B: Around it, there's also the potential to have verticalization in the future, and there are also different classes of project, like etcd. We've previously talked in the TOC about the importance of etcd coming in and being a stability-first component, rather than trying to express great velocity.
B: Some of the things that might be provided by different categories would be: white papers; a category-specific landscape — for example, the security landscape, or the serverless landscape that we've already seen from the serverless working group; more patterns, with focused reference architectures around the category; and obviously working groups. This would also give us a mechanism for migrating the working group model to something that I think has a bit more long-term value.
B: Yeah, I think something like this — especially if we have a proliferation of new functionality on top of Kubernetes, I think that's going to present all kinds of challenges which we've managed so far to shy away from, which I'm quite glad about, but I'm not sure that's something we can postpone indefinitely.
F: I actually think taking the existing projects, and some of the prospective ones from the upcoming presentations, and also from the landscape — or just other projects that we know about, not that they would necessarily come into the CNCF — and just saying: well, we took this set of 50 projects and we applied this approach, and this is what it would look like. I think, you know, having some concrete examples would help a lot.
G: So I understand it, but I just want to make sure that we keep that objective in mind, because I think what we don't want to do is end up in endless adjudication over taxonomizing a project in one spot or another, because there's a perception that being taxonomized one way means one thing, and being taxonomized another way means another thing. I think we want to make sure that the emphasis is on clarity. This is not a value judgment that we're really trying to make.
G: Ideally, you want a taxonomy that allows projects to effectively taxonomize themselves in a way that is accurate. What we don't want to do — and we have already seen ourselves do it a lot — is end up in a lot of adjudication over fine distinctions that don't necessarily make a difference.
E: There may well be more than one proposed taxonomy. I can imagine one by area and one by, you know, anchor project or something, and maybe a combination of those. Just write those down and be very clear what they are and what the goals are, before we start arguing about whether this taxonomy is better than that one. Yeah.
B: Okay, so Chris is suggesting that we continue the discussion on the CNCF reference-architecture working group mailing list. I take it everyone is able to access that — you should sign up for it if you haven't; tell Ken. So can we come back at the next TOC meeting with a list of potential categories from the architecture? Maybe even present the current working architecture v2 at the same time.
B: I care about this; for me it is very important that the projects are happy — then other things will follow. We had the discussion about the trademark thing that came up around [unclear], and a solution, I hope, is in progress. And on this — this is something, a slide, that I showed in Copenhagen — maybe another thing we could do off the back of categories is have some kind of a roadmap.
I: Can you guys hear me? Yes? Okay, yeah, let's start. So I'm told I'm to talk about etcd — how it's built internally, architecture things — and this is going to be very high level, in only 15 minutes or so, so please ask any questions you might have at the end of the presentation. So next slide, please.
So etcd is a consistent, distributed key-value store, mainly used as a separate coordination service in distributed systems. It is designed to hold small amounts of data that can fit entirely in memory, although we still write to disk for durability — so you don't want to store all of your application's data in etcd. It is quite popular: Kubernetes relies on it, and there are also a lot of non-Kubernetes use cases.
I: So this is how etcd works in terms of Kubernetes. The Kubernetes control plane interacts with etcd; for instance, the Kubernetes API server persists all cluster metadata in etcd, and the kubelet, or node agent, can subscribe to this information through etcd. Whenever changes happen in etcd, etcd notifies the client — which is the Kubernetes API server — so that it can keep the data up to date. Next slide, please.
And etcd is distributed for high availability, while we prioritize consistency and partition tolerance. What that means: etcd provides one logical cluster view over many physical servers, and so long as the quorum is up, etcd continues to work even under machine failures. This redundancy provides fault tolerance. Next slide.
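The availability claim above comes down to majority arithmetic. A minimal sketch (not etcd's actual code) of how quorum size and tolerable failures relate to cluster size:

```python
def quorum(cluster_size: int) -> int:
    """Smallest majority of voting members needed to commit a write."""
    return cluster_size // 2 + 1

def tolerable_failures(cluster_size: int) -> int:
    """Number of machines that can fail while the cluster keeps working."""
    return cluster_size - quorum(cluster_size)

# A 5-node cluster commits with 3 acks and survives 2 failures.
```

This is also why etcd clusters typically run with an odd number of members: growing from 3 to 4 nodes raises the quorum without raising the number of tolerable failures.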
So this is etcd's API. etcd has a flat binary key space with no directory hierarchy, so etcd uses ranges — key intervals — to search for keys. This interval model supports encoding keys under prefixes, as if in a directory. And an etcd lease is not tied to any session or connection; you can create as many leases as you want.
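The flat key space plus range queries can be sketched as follows. `prefix_range_end` mirrors the trick etcd clients use — increment the last incrementable byte of the prefix — to turn a prefix lookup into a half-open range `[prefix, end)`; the function names here are mine, not etcd's API:

```python
def prefix_range_end(prefix: bytes) -> bytes:
    """Exclusive upper bound of the range covering all keys with `prefix`."""
    end = bytearray(prefix)
    for i in reversed(range(len(end))):
        if end[i] < 0xFF:
            end[i] += 1
            return bytes(end[: i + 1])
    return b"\x00"  # prefix was all 0xff: range extends to the end of the key space

def get_range(kv: dict, start: bytes, end: bytes) -> dict:
    """Range lookup over a flat, sorted binary key space."""
    return {k: v for k, v in sorted(kv.items()) if start <= k < end}
```

`get_range(kv, b"/app/", prefix_range_end(b"/app/"))` then behaves like a directory listing, even though there is no directory hierarchy.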
I: Instead of a key having a TTL, a lease with a TTL is attached to the key, and when the lease expires, all associated keys in etcd's storage are deleted. This model also reduces keep-alive traffic: say multiple keys are associated with the same lease object, and keep-alives are multiplexed over a single gRPC stream — then we can make the keep-alive broadcasting more efficient. In addition, keep-alives are processed by the leader without going through the Raft layer.
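A toy model of the lease semantics just described — the TTL lives on the lease, many keys can attach to one lease, and one keep-alive refreshes them all. Class and method names are illustrative, not etcd's client API:

```python
class LeaseKV:
    def __init__(self):
        self.kv = {}
        self.leases = {}  # lease_id -> {"expiry": time, "keys": set of attached keys}

    def grant(self, lease_id, ttl, now):
        self.leases[lease_id] = {"expiry": now + ttl, "keys": set()}

    def put(self, key, value, lease_id=None):
        self.kv[key] = value
        if lease_id is not None:
            self.leases[lease_id]["keys"].add(key)

    def keep_alive(self, lease_id, ttl, now):
        # one refresh covers every key attached to the lease
        self.leases[lease_id]["expiry"] = now + ttl

    def tick(self, now):
        # when a lease expires, all of its associated keys are deleted
        for lid in [l for l, s in self.leases.items() if now >= s["expiry"]]:
            for k in self.leases[lid]["keys"]:
                self.kv.pop(k, None)
            del self.leases[lid]
```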
I: So we don't have any consensus overhead when we are handling lease keep-alive requests. etcd can also serialize multiple operations into a single conditional mini-transaction. Each transaction includes a conjunction of conditions — guards — so we can do checks on a key's version, its modified revision, or the value of the key; then a list of operations to apply when all conditions evaluate to true; and a list of operations to apply if any of the conditions evaluate to false. These transactions make our distributed locks safe, because access can be made conditional on whether the client is still holding its lock.
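The conditional mini-transaction can be modeled as a compare-then-apply step. Here each key carries a version that every write bumps, so a client can require "the lock key is still mine" before mutating anything. This is a sketch of the idea, not etcd's Txn API:

```python
def txn(kv, compares, then_ops, else_ops):
    """kv maps key -> (value, version). Apply then_ops only if ALL compares hold.

    Each compare is (key, field_index, wanted): field_index 0 checks the value,
    1 checks the version. Each op is (key, new_value); writes bump the version.
    """
    ok = all(kv.get(key, (None, 0))[idx] == want for key, idx, want in compares)
    for key, value in (then_ops if ok else else_ops):
        _, version = kv.get(key, (None, 0))
        kv[key] = (value, version + 1)
    return ok
```

A distributed lock stays safe because a client that lost the lock fails its guard and its writes never apply.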
I: So we have streaming RPCs for watch and lease keep-alive. Systems like ZooKeeper, Consul, or etcd v2 can only return one event per watch request, and they require long polling over HTTP, forcing those systems to basically hold open a TCP connection per watch request. But say you have thousands of watch clients — then you can quickly use up all the socket and memory resources. So etcd v3, instead of opening a new connection per watch request, multiplexes watch events over a single gRPC stream.
I: Next slide, please. So etcd writes a distributed, consistent log via Raft for durability, and the underlying storage layer is a write-ahead log (WAL). Say a client sends a write request to the etcd server: this proposal first goes to the leader, and when the proposal has been agreed by the quorum of the cluster, the leader commits the entry, and when we commit, we append the entry to the WAL file.
I: This committed log entry is persisted — and when we say persisted, it means fsynced down to the disk, which is what gives durability. If the machine crashes, we can just restart the server, and the server replays the logs back from disk. In order to avoid running out of disk space, we split the WAL into small segment files, periodically purging the old ones. For performance, each segment file is preallocated at 64 megabytes, so there is no latency for metadata updates or allocating blocks. Buffering is also special: a flush happens only on a full sector write or when explicitly asked, and etcd flushes WAL writes to disk every four kilobytes. Also, for consistency, we keep a rolling CRC, and we are also safe against torn writes.
I: So the smallest unit of write is a single record — a Raft entry or Raft hard state — and each record follows a data alignment. Say one disk sector is 512 bytes and a WAL record is 1022 bytes; then the WAL encoder adds two padding bytes at the end to make it fully sector-aligned. Assuming a single-sector disk write is all-or-nothing, the writer never straddles a record across disk sectors. So yeah, this is how we prevent torn writes. Next slide, please.
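The alignment arithmetic from the example (a 1022-byte record padded by 2 bytes) generalizes to: pad each record so it ends exactly on a sector boundary. A sketch, assuming 512-byte sectors as in the talk:

```python
SECTOR = 512  # assumed sector size from the talk's example

def padding_for(record_len: int) -> int:
    """Bytes of padding appended so the record ends on a sector boundary."""
    return (-record_len) % SECTOR

def padded_len(record_len: int) -> int:
    return record_len + padding_for(record_len)
```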
So etcd has a separate backend database, because the WAL is only for appending Raft entries in binary format, so we need a nicer layer on top in order to represent the key-value data. etcd v2 only keeps the most recent key-value mappings, discarding the older versions. However, this is not good, because a watch client may miss the discarded events during brief network disconnections, leaving an unpredictable window.
I: To avoid that, the etcd v3 API retains the history of key revisions through a multi-version concurrency control (MVCC) model. The retention policy for this history can be configured — I know Kubernetes uses one hour — and typically a cluster retains the superseded key data for hours, enough to tolerate longer client disconnections, not just transient network disruptions. In etcd v3, each write increments the modified revision, a global counter; an in-memory B-tree index maps each key to its revisions, with each node uniquely identified by the key and containing its historical revisions. On disk, a B+ tree stores the modified revision as the key and the key-value data as the value. Next slide, please.
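The two structures just described — an in-memory index from key to its revision history, and a store keyed by revision — can be sketched like this (illustrative names, not etcd internals):

```python
class MVCCStore:
    def __init__(self):
        self.rev = 0
        self.index = {}  # in-memory B-tree stand-in: key -> sorted revisions
        self.store = {}  # on-disk B+ tree stand-in: revision -> (key, value)

    def put(self, key, value):
        self.rev += 1  # global counter: every write gets a fresh revision
        self.index.setdefault(key, []).append(self.rev)
        self.store[self.rev] = (key, value)
        return self.rev

    def get(self, key, rev=None):
        """Read the latest value at or before `rev` (defaults to newest)."""
        rev = self.rev if rev is None else rev
        past = [r for r in self.index.get(key, []) if r <= rev]
        return self.store[past[-1]][1] if past else None
```

Because old revisions are retained, a watcher that reconnects can replay history instead of missing events — exactly the v2 problem described above.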
So we spent just as much time implementing a fault-tolerant client — meaning it handles transient disconnects or network partitions, including for the streaming RPCs. On the server side, the transport layer handles talking to the other peers; an alarm layer puts the cluster into maintenance mode, for example when the data exceeds the database size limit or when it finds data corruption; the MVCC layer implements multi-version concurrency control to retain the historical data, and also implements watch storage; BoltDB is an embedded key-value storage engine that etcd uses to persist its data on disk; and the Raft layer handles the log replication.
I: We spend a lot of time on testing. etcd has a very limited set of features, so reliability is our highest priority. The etcd functional tester verifies the correct behavior of etcd under simulated system failures: it sets up an etcd cluster under high-pressure load, continuously injects failures into the cluster, and expects the etcd cluster to recover within a few seconds. This has been extremely helpful for us in finding critical bugs. Next slide, yeah. So this is the roadmap.
I: Currently, etcd member add can be quite disruptive. Say a new member joins the cluster: the etcd leader has to replicate all the logs from the beginning, or send a snapshot to the new member. That is already a lot of work for the etcd leader node, and it is even worse if the new member is partitioned or is being slow — it can affect cluster availability.
I: So the cluster adds a non-voting member — a learner — before the disruptive configuration change happens. In that case the leader still replicates the logs to this learner node, but the learner is not yet counted toward the quorum. Once the new server has caught up, we can promote it to a regular node and count it toward the quorum. And while the learner node is catching up, etcd does not need to wait on the fresh node for the cluster to reach consensus. So this is one of the features.
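The learner's effect on commit counting can be sketched by extending the majority rule so that only voters count, even though the leader replicates to everyone. An illustrative model, not etcd's Raft code:

```python
def is_committed(ack_ids, members):
    """members maps node id -> 'voter' or 'learner'.

    An entry commits on a majority of voters; learner acks are ignored,
    so a slow learner never stalls consensus.
    """
    voters = [m for m, role in members.items() if role == "voter"]
    voter_acks = sum(1 for m in ack_ids if members.get(m) == "voter")
    return voter_acks >= len(voters) // 2 + 1

# 3 voters + 1 learner: quorum is still 2 voters.
```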
I: We want to add this in the next release. Okay, next slide, please. So, yes — we want to propose etcd as a CNCF project, and I believe the CNCF can benefit etcd in a lot of ways. Right now we have a shared Google Cloud account for our release process and also for testing; we have been using this account since the early days of CoreOS.
I: Now it is not clear who pays the bill, since the team is distributed across different companies. We could also hopefully get better CI support: right now we use a free CI service, and once we grow the project we may need more resources, more computing power — so we might need a dedicated Jenkins cluster, something like that. And more importantly, with the CNCF we want to grow the etcd community, and hopefully get more consistent contributors and maintainers. Yeah, that's it.
B: Can we go to the next slide, please, Taylor? Now, before we go on, I'd like to mention that I'm extremely keen to sponsor this project. I spent a long time doing some due diligence with Ben Hale, who is on the call, and a team from Facebook who co-developed it with Pivotal. I believe there's probably going to be an initial set of questions around where exactly this fits into the landscape.
J: Yep, great. So my name is Ben Hale; I'm a longtime Spring team member and a Pivotal employee. I have with me Robert Roeser, formerly of Netflix and now at a startup called Netifi, and we also have on the phone Steve Gury, formerly of Netflix, now at Facebook. So next slide, please.
J: So the RSocket project came about out of efforts at Netflix to think about what network protocols mean in the context of microservices. Coincidentally, in the last year or so, the Spring team has been looking really heavily at reactive programming generally — but, more importantly, at what it means in the Java world to start doing microservices beyond just your first or second microservice — and we've been big fans of the Reactive Streams pull-push back-pressure programming model.
J: All of a sudden, you have things pushing data a bit faster than the consumer can handle, or the consumer is misbehaving in some way that isn't being communicated back to the publisher. So we coincidentally started looking at what protocols might be available to us — what improvements we could make to take this programming model that we truly believe in and take it across the wire. At the same time, some of the staff from Facebook started reaching out to us and said: hey,
we've got this protocol that we're using internally quite a bit — is this something the Spring team would like to be involved with? And we said absolutely yes, because when we talk about the RSocket protocol, we're talking about a protocol that answers a lot of questions that we see currently in modern-day microservice design. It's a protocol that's primarily message-driven, rather than straight RPC; it's asynchronous; it's multiplexed; and it hits a lot of the high points that straight HTTP doesn't solve right out of the box.
J: One of the other side advantages that we've been real fans of — and we're starting to see it more and more inside our customer projects — is that there's browser support for this protocol. We'll talk a little bit later on about how this is achieved in standard networks. And as I said before, it supports the reactive principles from the Reactive Manifesto. Next slide, please. So one of the key things about the RSocket protocol is that it encapsulates some discrete interaction models, rather than being really, really generic and saying "build it yourself."
J: So you might see fire-and-forget used for diagnostic logs or something like that — stuff that isn't absolutely critical. We also have standard request-response, which you're familiar with from something like HTTP, where you send a request and you expect some sort of confirmation to come back. Maybe there's a payload, maybe there's not, but you do get a positive acknowledgment from the side receiving the message that the message has been received successfully.
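The two interaction models described so far differ only in whether a reply frame flows back. A toy sketch — not the RSocket API; the names are mine:

```python
class Responder:
    def __init__(self):
        self.received = []

    def handle(self, msg):
        self.received.append(msg)
        return f"ack:{msg}"

def fire_and_forget(responder, msg):
    responder.handle(msg)
    return None  # no reply frame: best-effort, potentially lossy delivery

def request_response(responder, msg):
    return responder.handle(msg)  # exactly one reply confirms receipt
```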
J: So, as I said, the connection — the protocol itself — is bidirectional. There is the concept of a client and a server to establish a connection, but once the pipe is connected between two entities, either side can then start those interaction models that we saw. So there isn't one side that is disadvantaged relative to the other; they become equal members of the network.
J: You can drop the connection completely, potentially, which could be expensive or not depending on the circumstances you're in — but otherwise the server is going to attempt to do a potentially very large amount of work, and you have no influence on whether or not that work actually still needs to get done. So RSocket builds the concept of cancellation into the protocol itself, potentially short-circuiting very expensive operations if they become unnecessary as time goes on. Another really, really important feature — one that's been proven out quite nicely inside Facebook — is this idea of resumability.
So inside an RSocket connection, state can be maintained about a given session: the data that has been transferred across that session and successfully received by the other side. This becomes really useful in a protocol, because it means, say, you're transmitting data from a data center to a mobile device, and the person with that mobile device is walking around on the street, eventually walks into a Starbucks, and flips over to Wi-Fi.
J: The protocol itself supports this idea — and implementations are free to choose how exactly it works — that you have a session, and even if there has been a network interruption, you can resume that session where you were, and you're only responsible for the messages sent since the last message that had been successfully consumed.
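Resumability can be modeled with per-session positions: the sender buffers frames with their positions, and on reconnect the other side reports the last position it saw, so only the gap is retransmitted. A sketch under assumed names, not the actual RSocket RESUME frame handling:

```python
class ResumableSender:
    def __init__(self):
        self.position = 0
        self.buffer = []  # (position, frame) pairs kept for resumption

    def send(self, frame):
        self.buffer.append((self.position, frame))
        self.position += 1

    def resume(self, last_received_position):
        """Frames to retransmit after a reconnect."""
        return [f for p, f in self.buffer if p > last_received_position]
```

A real implementation would also bound the buffer and drop frames the peer has acknowledged.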
We have the idea of application flow control between two connected peers: they are handed out fixed numbers of requests that they are allowed to maintain, giving a client-side load-balancing kind of behavior. And then, finally, we also support the idea of fragmentation of individual frames as data is sent — especially when it's a large piece of data, say photos or a video — to help networks; it's very often useful to be able to fragment those payloads. Next slide.
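Fragmentation itself is just chunking a payload into bounded frames and reassembling on the other side; a minimal sketch:

```python
def fragment(payload: bytes, max_frame: int):
    """Split a payload into frames no larger than max_frame bytes."""
    return [payload[i : i + max_frame] for i in range(0, len(payload), max_frame)] or [b""]

def reassemble(frames):
    return b"".join(frames)
```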
J: Please. So those are the features of RSocket — the things that it promises to do for you — but one of the key things that really attracted the Pivotal team is this idea that it's really, really flexible. We talk about this as a protocol, and really it's only a network framing protocol; it's completely transport agnostic. It can be routed over raw TCP, which we see a lot of people doing; if you only have access to HTTP/1, you can do it with WebSockets; or you have HTTP/2.
J: It builds very nicely on that, and even exotic protocols such as Aeron, a UDP protocol, can all benefit from the RSocket layer sitting on top. It's also payload agnostic: we'll probably see a lot of users sending protobufs across RSocket, but it's absolutely not required that that be the thing you send. You can send JSON just as easily, or maybe you're a company with your own custom binary payload — because RSocket is just a framing protocol, it allows you to put any bag of bytes you want inside the payloads that are sent.
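"Just a framing protocol" means the wire format only delimits bags of bytes. A simplified sketch of length-prefixed framing (RSocket over TCP does use a 24-bit length prefix, though real frames also carry headers this sketch omits):

```python
def encode_frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 3-byte big-endian integer."""
    return len(payload).to_bytes(3, "big") + payload

def decode_frames(stream: bytes):
    """Split a byte stream back into the payloads it carries."""
    frames, i = [], 0
    while i < len(stream):
        n = int.from_bytes(stream[i : i + 3], "big")
        frames.append(stream[i + 3 : i + 3 + n])
        i += 3 + n
    return frames
```

Note that the framing layer never inspects the payload — JSON, protobuf, and raw binary all round-trip identically.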
We also liked that it's very much programming-model agnostic. Inside the Spring team, we are very big fans of the messaging abstraction: you fire off a message, and it can be routed to some arbitrary piece of code on the other side based on a routing tag attached to it.
J: So there are really powerful implementations in both Java and C++, with JavaScript coming on, but we also see them for things like Kotlin, and you can envision places where somebody could implement it in Python or, you know, Go, or something like that — name your favorite language. So if we take a look at the next slide, this is a sort of graphical representation of it.
J: Please. Okay, so the next couple of slides are just a comparison between gRPC, NATS, and RSocket, which are the closest possible competitors — or closest possible analogs — to something like RSocket. I don't think we need to go through all of these; we put them in here as a reference for you to take a look at. But the key thing is that, in general, RSocket aims to be this sort of layer in the middle.
J: So a lot of the things that you'll see gRPC or NATS potentially do, it will do built in — whether it's something like cancellation, which NATS doesn't have but gRPC does in a limited way, it's built first-class into the protocol. It's a full-duplex protocol, as we described earlier, where once a connection has been established, either side can initiate interactions back and forth. Next slide, please. We have, as we described a little bit before, the idea of fire-and-forget for lossy kinds of things.
J: We have resumability built in as a first-class citizen, and there's flow control based on the Reactive Streams protocol — well proven out, especially in the Java world, but starting to reach alternate programming languages as well. Next slide, please. And then this slide encapsulates what we described a little bit before — the various languages and frameworks that support these things today, and certainly RSocket; we'll talk about this.
J: Okay, next slide, please. So for RSocket today, there are over 600 GitHub stars across the big projects inside of it — the Java, C++, and Kotlin implementations, and the main spec itself. As for the contributors: as we said before, Facebook and Netflix are the top-level, high-visibility contributors and users of this protocol, but Spring and Project Reactor — our Reactive Streams implementation inside the team at Pivotal — are very, very big on it as well.
J: It's something that's a big tent-pole for the next year for us. And then, finally, I'd be remiss if I didn't mention the Netifi guys — former Netflixers who have gone off and built an entire company around this protocol and what it can bring to enterprises. Next slide.
Please — final slide here. We really would like to get into the CNCF, because we do have three very large companies currently invested in this very heavily, and one of the key things we observed when we first started contributing and collaborating is that there is huge interest in having a neutral third party for this kind of work. We want to make sure there is a place where we can all go and there isn't a ton of politics going on between the three different teams — not that we think there is, generally, but it is nice to have that sort of neutral third party to help with this. RSocket itself is ideal for connecting microservices, and obviously the CNCF is a great place to start talking about that.
J: A lot of microservice workloads are going to be going onto CNCF projects like Kubernetes and things like that, so we think this is a great place to start standardizing a protocol that can help those kinds of applications, close by. We want to expand the RSocket community beyond our Java and C++ strongholds; obviously, Kubernetes is so polyglot these days that it is a great place for us to get in front of a bunch of different language communities.
And
finally,
we
want
to
integrate
with
the
other
CN
CF
projects
where
we
can,
because
there
are
a
lot
of
advantages
to
our
socket
over
something
like
straight
HTTP
or
even
HTTP
to,
and
so
we
want
to
make
sure
that
that
the
the
advantages
that
our
socket
actually
brings
to
the
table
can
be
used
by
various
components
inside
of
the
CN
CF
and
inside
of
sort
of
the
kubernetes
ecosystem.
It's
not
just
a
one-way
street,
where
our
sockets
getting
a
lot
of
help
from
being
in
the
CN
CF.
K: On the Reactive Streams side, RxJava is one of the top-starred Java projects on GitHub — it's very popular in Android — and, like Ben said, one of the big problems with it is that once you get to the network, there's this very clunky set of abstractions, like circuit breakers and whatnot. This basically fills a huge, huge gap that people have there, so that's kind of the tie-in for the community; people are interested in it.
J: So those are certainly the big four contributors as they stand now, but I believe, if you counted them up, there's a bunch of individual contributors. I know we have a fair number of small consultancies in Europe who have made significant contributions, both to spec improvements and to the Java implementation. Excellent.
J: Chris asked in the chat: there is a spec, but what's considered the reference implementation? I'd say the reference implementation today is probably the Java one, but the C++ one is also very, very close. Neither of them has the spec 100% implemented, but we are working very hard, certainly on the Java side, to get there. Okay.
K: That is a good question. There are many times in a distributed system where you have plenty of network bandwidth, but your application receives an expensive call. Say you have a large payload, like a meg: you can rip through them all day long, and you can use the TCP buffer to stop your system from being overwhelmed.
K: But if you had a chain of services, the flow control will actually be propagated through the chain. So if you had three services — think of A, B, and C — and C is a slow service, but B has plenty of network bandwidth, B can get overwhelmed very quickly. With application-level flow control, the back-pressure can actually propagate up to the caller A to have it slow down, preventing thundering herds.
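The A → B → C example can be sketched with Reactive-Streams-style demand signals: the slow consumer's request(n) propagates through the intermediate stage to the source, which then emits at most n items. An illustrative model, not an RSocket implementation:

```python
class Source:
    """Service A: emits only when downstream has granted credit."""
    def __init__(self):
        self.demand = 0

    def request(self, n):
        self.demand += n  # credit granted from downstream

    def emit(self, items, sink):
        for item in items:
            if self.demand == 0:
                break  # no credit: stop instead of overwhelming downstream
            self.demand -= 1
            sink.on_next(item)

class Stage:
    """Service B: relays demand upstream and data downstream."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.received = []

    def request(self, n):
        self.upstream.request(n)  # back-pressure propagates up the chain

    def on_next(self, item):
        self.received.append(item)
```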
B: Guys, we're out of time, and I just want to say: for me, the exciting pieces of this are distributed RxJava, good support for streams, and in particular this federated flow control — which, I think, is the use case you mentioned when we first spoke but didn't bring up today: you've got mobile phones connected to the backend. Yep.