Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
Intro: CNCF Serverless WG / CloudEvents - Doug Davis, IBM & Cathy Zhang, Huawei
Provide an introduction to the CNCF Serverless Working Group, the CloudEvents specification and the new Workflow sub-group. More details later...
To learn more click here: https://sched.co/FuL5
A: Right, I'll present the first half and then I'll hand it over to Cathy. What we're going to talk about here today is the Serverless Working Group within the CNCF, as well as the CloudEvents project, which is a spinoff from the Serverless Working Group. This session is strictly a high-level overview of the topics. We do have another deep-dive session tomorrow — I don't know, we'll show you the time later in the slides — but we're not going to dive too deep here.
We'll just give you a cursory introduction to what's going on in these various working groups; then Cathy, and then Clemens, another member of the working group, will do the deeper dive tomorrow. Just to give you a heads up. All right, so here's a quick agenda: go over the Serverless Working Group, CloudEvents, a little bit of what we're doing there in terms of other adjacent pieces of work like SDKs, workflow, that kind of stuff, and then, time permitting, we might have a demo for you — but we'll see how that goes.
Who's using it? Is it becoming popular? How are people using it? What are the value cases? Stuff like that — like I said, it just provides an overview of the technology. And then, finally, we ended with some recommendations of what the CNCF might do going forward — and one of the recommendations might have been "do nothing." We'll talk about what those are in a sec. The other thing we produced was a landscape, which is pretty much just a statement of what's out there in the ecosystem today: what are the open source projects out there?
What are the proprietary projects out there? What kinds of services hook into serverless frameworks or functions-as-a-service, those kinds of things — basically just giving a lay of the land related to serverless and functions-as-a-service technology. Now, as I mentioned earlier, we had a set of recommendations for the CNCF, or the TOC in the CNCF. One of the recommendations was to look at something from a standardization perspective: what might there be agreement on within the community?
So we can all come together to produce a single form of something, rather than continuing to have different syntactic views of what is, for the most part, the same kind of thing. What we came up with was events. We realized that functions-as-a-service, and serverless in general, are really, in a lot of cases, about eventing. So can we find a way to standardize the events themselves coming into the system, so people can stop fighting about that?
The other offshoot of that was: can we look at standardization of workflow in this space? Some people want to get involved, but they don't want to have to reinvent the wheel, or redo their tech or their applications, just because they're going to go to a different platform. So can we standardize that as well? Cathy will talk more about the workflow in a minute, but for now let's go and talk about CloudEvents. As I said, what we're looking to do is to standardize events.
Now, in the past there have been many, many efforts around standardizing events — common event formats and stuff like that — and those tried to be very ambitious. They really tried to standardize the entire event structure, down to every individual little property of the data model related to the business logic, and they had limited success, to put it that way. We decided to take a different approach. We were not going to try to standardize the entire event itself.
Rather, what we wanted to do was see whether there were common bits of metadata that we could extract from all events and say: let's agree on those. So, for example, if you look at this example CloudEvent here, you'll see things like source, ID, time and content type. Those describe the event itself, and almost every single event will always have at least those kinds of properties. That's what we were shooting for: the bare minimum set of properties that pretty much every single event is going to want to have in its message.
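To make the "bare minimum metadata" idea concrete, here is a sketch of a minimal event envelope in Python. The attribute names follow the later CloudEvents 1.0 attribute set (the talk predates 1.0, and the 0.1 draft being discussed used slightly different names, e.g. `eventType`); the values and the payload under `data` are purely illustrative.

```python
import json

# Minimal CloudEvent envelope: a handful of metadata attributes
# describing the event, plus a free-form "data" payload.
# Attribute names follow the CloudEvents 1.0 spec; the 0.1 draft
# discussed in this talk named some of them differently.
event = {
    "specversion": "1.0",
    "type": "com.example.image.uploaded",  # illustrative event type
    "source": "/blobstore/bucket-42",      # who emitted the event
    "id": "A234-1234-1234",                # unique per source
    "time": "2018-11-14T10:00:00Z",
    "datacontenttype": "application/json",
    # Everything under "data" is business-logic payload -- out of
    # scope for the spec, and can be any payload at all.
    "data": {"name": "cat.png", "size": 12345},
}

# The envelope is just JSON; any consumer can inspect the common
# attributes without understanding the payload.
required = {"specversion", "type", "source", "id"}
assert required <= event.keys()
print(json.dumps(event, indent=2))
```

The point of the sketch is how little there is: a few routing attributes plus an opaque payload, nothing about the business data itself.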
Everything else is reserved for "data." That's where the business-logic payload is going to go, and that can be freeform — it can be anything you want, as defined by the content type. So if you look at this, what you'll actually notice is that it's very similar to something you should all be very familiar with, which is HTTP. If you think about HTTP, what did they do?
It's the same with CloudEvents: the common properties are there to help get the message from point A to point B. Anything else related to the business logic is out of scope for the CloudEvents specification. So we define a consistent set of metadata — a core specification that defines that metadata — and then we have some transport specifications which say how you serialize those common properties in different formats: in JSON, over HTTP, over MQTT, those types of things.
So if you think about what we actually produced, it's actually very, very minimal, because we wanted to make this as useful as possible, in as many cases as possible, so it couldn't be too prescriptive. In the same way, HTTP is widely used with, in my opinion, a minimal set of definitions — we're following that exact same pattern here. I'm not going to go into the exact properties, because they're all relatively obvious for the most part; you can read the spec. But keep in mind: this actually is a complete CloudEvent.
Can we normalize the event properties down to the bare minimum needed to help a message get from one point to another? Again, it's about interoperability — facilitating integration across platforms — and, most importantly, staying out of the business of messing with people's business logic. Because if you go back to the slide, we don't touch "data." Data can be anything you want. In this example it's sort of JSON-ish, but it can be anything at all; it doesn't even have to be JSON — any kind of payload.
A
We
stay
out
of
that
line
of
work
right.
It's
ours
is
just
the
bare
minimum
properties
and
again,
this
is
all
about
taking
that
first
tiny
step
towards
defining
interoperability
for
functions
in
general.
Just
events
was
the
first
step
in
that
process.
We
may
look
at
other
things
in
the
future,
like,
for
example,
that
is
going
to
talk
about
workflow.
That's
another
thing,
we've
sort
of
picked
up,
but
there
are
more
things
that
were
been
thinking
about.
This
is
just
the
first
step
in
that
process.
So
what
are
we
actually
delivering?
You can read this picture yourselves, but we chose some of the more popular transports out there to try to get as much adoption as possible. And finally, we have a primer, which is a non-normative document that describes the design philosophy and some of the design decisions — why we did what we did — just to give you some background and some of the reasoning behind the decisions, because some of them might be a little controversial, but the primer will help.
It will help you understand why we made the decisions that we did. Okay, now another piece of work that we're doing — and this is really there not only to aid adoption but to help make sure that what we're producing is actually usable, that people can actually use this in the real world — is a set of SDKs. Basically, you can think of these SDKs as aids in the serialization and deserialization of CloudEvents.
So, for example, in the Golang case, if I'm writing a Go program, I'll use the Golang SDK: give the SDK my CloudEvent, tell it to send it over HTTP, and it will do all the magic for me — hopefully — to get it over HTTP to the other side. This is all still very much in an early, alpha form; we just kind of started the process. We are hoping that maybe by KubeCon some of these, like the Go one or the Java one, may be in an alpha stage, but it's still very much a work in progress.
Okay, you can see the languages that we're shooting for. The first three — Go, Java and JavaScript — have been started; the other two, C# and Python, have yet to be started, but there are rumors that they will start very, very soon, so expect those to come out. If you're interested in participating and helping prove out the spec, this would be a great area for you to help out in, because this is one area where we really need it.
B: I think some of you already know that we released CloudEvents version 0.1 in April of 2018, and the next version, 0.2, will come soon. After that, we're going to work on CloudEvents version 1.0. In 1.0 we are going to finalize the core event attributes — the metadata Doug mentioned before — and we're also going to finalize the set of protocols and the serialization...
...mappings — I think Doug mentioned some of those too. We're also going to produce some documentation to help you understand this and use it, such as a developer guide and a user guide, and then work to organize several interop demos to show how you can use these CloudEvents and to see how different implementations can work together through this standard specification.
You know, most serverless applications are not a simple function triggered by a simple event. I think most people are probably familiar with Lambda — just a one-event-triggers-one-function model. Most serverless applications are composed of multiple steps of function execution, and at each step different events can trigger different functions.
So a function workflow is actually a function execution flow — that's what we mean by "function workflow" — and this diagram shows an example function workflow. We can see that it involves multiple steps, and each step is associated with events and a function. It could be associated with multiple functions too, like step 3.
The developer only writes the business logic, and all the other work is handed over to the serverless platform: monitoring the event triggers; deciding how many containers or other compute resources to schedule to host and run those functions; how to manage each step of the workflow; how to pass information from one step to the next, or how to filter the information during that information-passing process; and also, if the function returns a failure or error, how to do the retry.
So we need a standard specification that lets application developers specify a workflow template which can be used by different cloud serverless platforms. If we do not have such a standard specification, then the application developer has to write many different types of workflow specifications or flow templates.
So the goal of the workflow spec is to provide a standard way for the user to specify the function workflow for their serverless application, so as to facilitate portability of the application across different vendors' platforms. I want to clarify what it is not: the workflow specification is not a specification for the function signature — such as specifying what language the function is written in, or the memory size for the function. It's not for that; we'd need a separate spec for that.
It is also not a spec for specifying the detailed event information, like the event format or metadata definition — that's specified by the CloudEvents specification. So what specifically does the workflow specification's scope include? It provides primitives for the app developer to specify the steps — you can call them states — involved in a serverless application workflow. It provides primitives for the developer to specify what event will trigger what function in the workflow. It also provides primitives for the app developer to specify the functions in each step.
That means how many functions there are and how those functions should be executed. It also provides the app developer primitives to specify how information should be passed or filtered from one function to the next, or from one step to the next. And it provides primitives for the app developer to specify how to retry, and what the platform should do if an error happens.
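The primitives listed above — steps/states, event-to-function triggers, information filtering, and retries — can be pictured as a declarative template that a platform interprets. The shape below is a hypothetical illustration only: the field names are invented for this sketch, and the actual draft specification defines its own structure.

```python
# Hypothetical workflow template illustrating the primitives in the
# spec's scope. Field names here are invented for illustration; the
# draft specification defines its own.
workflow = {
    "name": "image-pipeline",
    "steps": [
        {
            "name": "classify",
            # which event triggers which function (functions are
            # referenced by ID only; signatures are out of scope)
            "trigger-event": "com.example.image.uploaded",
            "functions": ["fn-classify"],
            # how information is filtered before the next step
            "result-filter": "$.labels",
            # what the platform should do on failure
            "retry": {"max-attempts": 3, "on-error": "abort"},
        },
        {
            "name": "publish",
            "trigger-event": "com.example.image.classified",
            # a step may invoke multiple functions, like step 3
            # in the diagram
            "functions": ["fn-caption", "fn-tweet"],
        },
    ],
}

def triggers(wf):
    """Map each trigger event to the functions it fires."""
    return {s["trigger-event"]: s["functions"] for s in wf["steps"]}
```

A platform holding such a template has everything it needs to wire events to functions without ever looking inside the functions themselves.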
So this slide shows some information about the workflow specification. We have put out the first draft of the specification, but it is just a draft, so we welcome more people to join this workgroup to drive the effort on the workflow specification. If you have any questions, or you feel there's an issue, you're welcome to go to the GitHub repo and file an issue or create a pull request for it. Of course, you're all welcome to contribute to the spec.
A: At KubeCon + CloudNativeCon — I think it was Copenhagen — Austen Collins from Serverless.com did a really wonderful demo; the link to it is on YouTube. What Austen presented was a group collaboration. Basically, there was an event producer: in this case we had two different blob stores, on AWS and Azure. They would produce an event, each one would go through a different gateway, and then on to each of the various serverless functions out there.
Basically every single company — all the major players: IBM, Google, Microsoft, Huawei, Red Hat, all the major players you could see on there — ended up receiving these events. What they ended up doing was taking the events in — the event was basically the notification of a new image being uploaded into the blob store — and they were all being sent as CloudEvents.
So these notifications, or events, were being sent out as CloudEvents through the gateway to the serverless function, which would then process the event — to make sure everybody could process a CloudEvent successfully, to make that interoperability statement. And then, to prove that they processed it correctly, they would tweet something to Twitter: either a message about the image — in some cases they would do some sort of visual recognition to try to guess...
...what's in the image — or, in other cases, they would actually modify the image and then upload the picture to Twitter. They had some fun with it. So what ended up happening was, on Twitter you'd see a couple of different things. For example, in the IBM case, Watson thinks it's a picture of a cargo door — okay, it did a pretty good job there. But then you have other people, like Oracle, who really had some fun with it: they uploaded a picture of Dan Kohn...
...the executive director of the CNCF, and they ended up sticking something smoking in his mouth and glasses on him. So people had some fun with this, but the point was to showcase interoperability of CloudEvents across all these platforms, to make sure everybody could handle it properly — and it actually started to prove that CloudEvents really are useful from an interoperability perspective.
Now, since I think we have time, what I wanted to do was show you — here's a link to the Twitter feed; actually I think it's still up there. You should look at the Twitter feed of all the old images we put up there and see some of the fun things, because some of the visual-recognition results are really humorous in terms of what some of the AI machinery thought was going on in the picture.
This is just a preview of what we're actually going to be doing for real at KubeCon Seattle, but I want to showcase some of the work that people have been doing. Again, this is all about interoperability of CloudEvents. In this particular case, for those who aren't familiar, there's a game called Mad Libs. It's a game, I guess, where someone comes up with a sentence and replaces words in it — nouns and verbs and stuff.
They leave those blank and ask somebody to give them a random noun, verb or adjective, and they fill it into the sentence. Because the person doesn't know what the sentence is, you come up with some really funny sentences. So what we did here is a variation of that.
In the middle we have a CloudEvents controller, which generates an event saying it found a blank in a sentence, and the blank is supposed to be filled in with a verb, a noun, an adjective, or something like that. It sends out that event to the various nodes that subscribed to those events from the CloudEvents controller. Those nodes then generate a random word fitting that category — noun, verb, adjective — and then generate another event.
That event goes back to the controller, which subscribed to those nodes, to see what words they produced. In this particular case the controller waits, I think, five seconds or so to see what events come back before it starts filling in the sentence with a random word from everybody that actually generated an event. So hopefully this will work.
So you'll see the events go out — and notice the timing is a little funky, because some CloudEvents infrastructures have a little bit of a delay when they first start up. But if you run it again, this time it should go faster, because they don't have that initial start-up delay. This goes into the whole notion of how serverless frameworks work: some of them are very quick because they have very quick start times; others are a little slower for that initial invocation.
But you can see what happened here: every node generated a set of words, and then the controller randomly picked one. So, for example, it picked "John" from the VMware Dispatch node; it picked — I can't read that — "proud" from the IBM node; and then it went back over here and picked "equipment." So you get kind of a funny sentence up there. Now, the other interesting thing is — where is my cursor?
I know you guys in the back of the room can't see, but the CloudEvent properties are all HTTP headers, all prefixed with "CE-" for CloudEvent. In this particular case, though, the event that this node generated — the one the CloudEvents controller subscribed to — is in the structured format, meaning the CloudEvent is the HTTP body, with the "data" attribute being the application's bit of logic, which is the word that it generated. So again, this shows the two different formats for HTTP — binary and structured — proving interoperability between two different endpoints for sending and receiving CloudEvents.
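On the receiving side, the two HTTP modes shown in the demo can be told apart by the Content-Type and the header prefix. A minimal sketch, assuming a 1.0-style lowercase `ce-` prefix (the demo-era draft used `CE-`, but HTTP header names are case-insensitive) and JSON payloads:

```python
import json

def parse_cloudevent(headers, body):
    """Rebuild an event dict from either HTTP content mode.

    Structured mode: the whole event is the JSON body.
    Binary mode: attributes arrive as ce-* headers, payload as body.
    """
    headers = {k.lower(): v for k, v in headers.items()}
    if headers.get("content-type", "").startswith("application/cloudevents+json"):
        return json.loads(body)                      # structured mode
    event = {
        k[len("ce-"):]: v
        for k, v in headers.items()
        if k.startswith("ce-")
    }
    event["data"] = json.loads(body)                 # binary mode
    return event

# The same logical event, received both ways:
structured = parse_cloudevent(
    {"Content-Type": "application/cloudevents+json"},
    '{"id": "1", "type": "demo.word", "data": {"word": "equipment"}}',
)
binary = parse_cloudevent(
    {"CE-Id": "2", "CE-Type": "demo.word", "Content-Type": "application/json"},
    '{"word": "proud"}',
)
```

Either way the receiver ends up with the same attribute names, which is the interoperability point the demo is making.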
Okay, so anyway, that's just a demo of what we're planning on showing for real at KubeCon Seattle later this — I guess next month. So with that — a little bit early, but we are running out of time — what I want to do is give you some links for more information. The Serverless Working Group, if you're interested in that: technically the Serverless Working Group actually isn't doing a whole lot right now, because its work is pretty much done — it produced the whitepaper, the landscape, and stuff like that.
We are still working on the workflow project there — it's almost a subgroup under serverless — but serverless itself is kind of finished as-is right now, while workflow is just starting to pick up, so that work is still going on. As I mentioned, the CloudEvents stuff is a separate project; it's actually a sandbox project now under the CNCF, with some links there to the main site, the repo, the org and stuff like that.
You can also see a link to the SDKs — they're under the CloudEvents org on GitHub, "sdk-" and then the language. Now, a little bit of an advertisement: tomorrow, as I mentioned, Clemens and Cathy will be doing a deep dive on CloudEvents. We basically just barely touched the surface on this one, but they're going to go deeper on it and explain some more of the rationale — you know, eventing versus messaging, what the differences are and stuff like that. That's tomorrow at, what is it, 3:05. But then Cathy also has a session...
...that's going to go deeper into the workflow stuff, at 12:15. So if you're interested in workflow, definitely try to attend Cathy's session for more on that. Okay — unfortunately I didn't include the room for Cathy's session, I apologize; you'll have to look at the schedule for that, but it's at 12:15. And with that, I think we're basically done, and we have time for questions.
C: [question inaudible]

B: So in the function workflow we leave out the detailed function definition — the function signature definition. In the function workflow we only reference a function by its unique ID; that's it. Then, from that unique ID, the back end is of course going to retrieve the information — what language it's written in, what the memory requirement is, all that information. But the purpose is that we do not want the workflow to dive into the details of the function definition.
A
D
Thank
you
for
the
talk,
so
this
is
also
under
the
ACNC,
but
there's
another
one,
Kentucky
native,
also
under
this
one,
how
you
relate
these
two
projects.
B: So Knative is an open-source project, right? I think we have someone from Google who works on Knative who has also joined this workgroup, so I think there will be cooperation. That's an open-source project, while what we're doing currently is defining a standard. So I think Knative will look at the standard defined in this workgroup and see.
D: Excuse me, just a follow-up while I have the mic. The unqualified worry that I have is that earlier, Doug, you talked about this working group — I think you claimed that, other than the workflow subgroup, every other piece of work is done, whereas Knative is just getting started on its own, right?
A: So keep in mind, right now we're obviously looking at events; workflow is another one. When we first started, we were just doing events. Workflow came up as we realized — sort of nearing maybe the midpoint for CloudEvents — okay, maybe we should start looking a little further forward. That's when we had a discussion, took a vote, and said: okay, workflow is the next one we want to start looking at. I expect there to be other things popping up on our radar to look at for possible standardization.
A
Don't
know
for
sure
what
they're
gonna
be
chances
are
there
obviously
will
be
in
the
server
list
function.
Space
may
be
function.
Signatures
Kathy
mention
that
other
types
of
things
in
the
area,
just
because
the
service
working
group
isn't
doing
a
whole
lot.
Right
now
doesn't
mean
it's
it's
dead,
it's
still
very
much
alive
and
we
look
at
it
as
being
sort
of
a
sleep
right
now.
Once
the
cloud
event
sort
of
finishes
up,
your
workflow
gets
a
little
bit
further
along
then
we'll
find
the
next
thing
to
wake.
A
...it up, figure out what we're going to do next, spin off another working group, and the serverless group will go back to sleep — or we'll have another working group work on something, assuming the CNCF Technical Oversight Committee says yes, that's a good thing to look at. Okay, all right — we have time for one more question.