From YouTube: CNCF SIG Runtime 2020-10-15
A
Hello, hi, this is Bin from NEC. Hey, good morning.
B
All right, looks like today might not have a very large audience, but thank you for joining us. We're excited to be here and to hear about your project, FogFlow. We want to learn more about it and how it fits into the cloud native ecosystem.
B
Yeah, we're happy to learn, and this session is recorded too, so somebody else who wants to come in later can actually see it as well. So yeah, go ahead.
A
So today, yes, this is a project I have been working on for many years, so it's really my pleasure to introduce it to you. FogFlow is, you could call it, an edge computing framework, a fog computing framework, to support IoT applications on top of cloud and edge environments. It has a special capability to support applications with an advanced programming model, and it is also a runtime system that supports this programming model and optimizes the deployment of IoT services. I will give a bit more detail.
Let's start with a little bit of background. I have been working on this project in the lab for many years; I am from NEC Laboratories Europe, located in Heidelberg, Germany, and we initiated this project nearly four to five years ago.
A
The major thing we did is to be able to run such data processing in between the sensors and actuators. In the old days, this type of data processing was supported by different kinds of big data processing frameworks: Hadoop for batch processing, Spark or Storm for stream processing. Nowadays Flink can even combine both with a unified programming model, and everything works very well in a cluster environment.
A
That was the way we did it before, but nowadays the system architecture has changed, and there are new requirements in terms of latency, privacy, and cost for supporting large-scale IoT services. So we have to adapt to these new requirements and also to the new environment.
A
We call this new environment the cloud-edge environment. It is highly distributed and has a hierarchy of different computation resources, including resources in the clouds and computation resources located at IoT gateways, which can be deployed on the street or in a factory.
A
First
then,
we
and
now,
according
to
that
we
can
easily
deploy
them
to
this
highly
distributed
infrastructure
without
having
too
much
management
complexity.
So
what
we
did
is
to
improve
or
to
overcome
the
complexities
of
food
from
both
sides.
One
is
to
have
this
easy-to-use
model.
A
We call it an intent-based programming model for programming edge applications. On the other hand, we have an efficient framework to manage the tasks: first translate the programming model into a concrete deployment plan, and then optimize that plan without too much involvement from the users or the system operator. Those are the two areas of focus in FogFlow. There are a few principles we tried to follow in our design. By the way, if you have any questions, just interrupt me so we can have some discussion.
A
Yeah, there are four principles we tried to follow in the design of FogFlow. The first is that we decouple the computation logic, that is, how we represent the service in the design phase, from the deployment requirements that we face at runtime, when we start to use the service.
A
So
that
is
the
this
the
corporate
strategy.
We
try
to
follow
and
there's
some
consideration
behind
this
principle
because
for
the
same
kind
of
service,
which
may
be
the
same
pipeline,
but
the
way
to
trigger
this
pipeline
may
be
different
may
need
to
be
different
according
to
how
we
use
it.
For
example,
if
we
use
it
for
simulation,
you
know
kind
of
cloud
environments.
We
may
not
care
about
too
much
about
the
latency.
A
That is why we separate out the computation logic and represent it as a graph, which consists of multiple data processing units; each one we call a task. On the other hand, we have the deployment requirements, which we call the intent.
A
This
basically
represents
how
would
you
like
to
trigger
such
computation
logic
from
different
perspective,
like
whether
you
want
to
trigger
that
for
a
large
amount
of
data
or
small
part
of
the
data
or
some
kind
of
service
level
objective
you
would
like
to
achieve
so
I
will
come
to
that
thing
of
later,
so
this
is
first
principle
in
our
design
and
second
principle.
A
We try to make the data between tasks visible. In Spark or Hadoop, the data is basically known only to the specific application developer; it is not open for everyone to share. That means the interoperability, and the ability to share tasks or data processing logic, is limited, because you don't know which type of input data a task needs or which type of output data it can produce.
B
A question: so the intent is basically the way you compute the steps on the DAG? Okay, and that actually means the trigger too, right? Does the trigger happen at the DAG level or at the intent level, when the computation is triggered?
A
Yeah, yeah, the trigger includes some specific or customizable requirements, but this intent is the trigger.

B
Got it. Okay.
A
In terms of our programming model, we follow the same approach as Apache Flink: we try to unify the programming model for different cases, especially for data-intensive IoT services and data-oriented applications. Internally, we try to optimize the runtime deployment with regard to a high-level service objective or constraint: optimize everything in order to minimize the cost, or to minimize the latency, or, for a machine learning application, with respect to accuracy.
A
At a very high level, you can see that FogFlow is a runtime system. On the one hand, it provides an interface for application developers to structure their applications based on our intent-based programming model, basically to define the computation logic as a graph. Then, taking that into account, with the intent as the trigger, the FogFlow runtime system decides how to construct the instances of the tasks, how to configure the tasks, and where to deploy them.
A
This type of metadata is also taken into account for the orchestration of tasks, and the usage context is what I mean by intent: from the user's perspective, how they would like to trigger the task. We consider these three types of context to orchestrate the application over this distributed environment.
A
I think this is somewhat self-explanatory. As I explained before, we have a graph, we call it a service topology, which consists of multiple tasks with some data dependencies, and this can later be triggered by a defined intent object. Just to mention the task: each task in the graph refers to an operator, and an operator is a dockerized application which can take certain inputs, do some data processing, and produce some output.
A
The
task
is
just
try
to
annotate
this
operator
in
the
context
of
service
topology
and
then
come
to
the
point
about
intent.
This
is
a
just
a
data
structure
that
represents
covering
different
powerful
requirement.
Like
you
define
which
service
topology.
Would
you
like
to
trigger
and
here's
this
kind
of
geoscope
to
fit
out
the
data
you
would
like
to
select
and
put
into
this
computation
logic
and
then
service
level
objective?
A
The service level objective represents your optimization objective for running the tasks, from different perspectives: the cost perspective, the latency perspective, or the accuracy perspective if it is an AI or machine learning pipeline; and then the priority of this task for using the resources in the overall system.
A
Yes, so let me give you a short introduction to the system itself. This is a layered view of the FogFlow system. At the top layer we have an interface, a graphical user interface, for the users to define the service computation logic, which is the graph, and then there is the management layer.
A
Topology
master
make
some
decision
based
on
the
information
come
from
this
discovery
because
discovery
and
broker
they
are
the
core
component
for
the
data
management
layer,
so
data
is
will
be
managed
by
different
broker
at
each
age
and
then
reports
which
type
of
data
can
be
available
at
local
age
report.
This
to
the
discovery
and
discovery
give
this
information
to
the
master
for
making
decision
about.
How
should
how
how
we
can
plan
the
computation
task
and,
like
basically
figure
out
how
many
tasks
we
should
prepare
for
each?
A
That means deciding how many instances we should instantiate for each task in the graph, and then deciding where to deploy them. Once it has decided where to deploy, the master tells the corresponding worker; the worker basically executes the commands to launch the tasks. The lowest layer is the data source layer, or device layer, which produces the data reported to the system but also consumes the data produced by the data processing tasks in the system.
A
This may be more concrete than the layered view. In terms of deployment, we have what we call the FogFlow cloud node. This is the centralized part, including all the core services we need to run the FogFlow system: the master for making the orchestration decisions and the discovery for reporting the availability of data in the system. The cloud node can also partially act as an edge node; it is more powerful, with more resources, but normally the edge nodes are located close to the users, like IoT gateways.
A
On the edge node we have only two components: one broker and one worker. The broker basically manages the local data, and the worker receives commands from the central master to decide what to do for launching and configuring the tasks.
A
We separated them out because, combined, we might have a buffering issue when propagating these commands in real time, or as fast as we can.
A
So
we
separate
two
channels
to
make
this
control
channel
to
be
more
efficient
and
fast
in
terms
of
data
management
in
workflow,
as
mentioned
before,
so
for
each
edge
node,
there's
a
broker
to
kind
of
manage
the
local
entity,
data
which
represents
some
states
of
device
or
just
represent
the
entire
iot
device,
and
then
the
central
discovery
just
index
or
this
availability
of
the
data.
A
So
this
is
one
of
the
unique
thing
is
the
data
we
kind
of
follow
a
standardized
data
model
in
this
is
what's
in
the
context
of
fiverr
the
european
community
for
iot
system.
A
We use a standardized data model called NGSI, and it covers two parts: NGSI-9 for managing the metadata part, and NGSI-10 for managing the full entity data. Both provide a pub/sub interface, and FogFlow is built on top of that.
A
In
terms
of
for
you,
the
last
part
how
we
orchestrate
this
ultra
storage,
the
service
yeah.
So
basically,
this
graph
is
given
by
this
application
developer
and
then,
after
receiving
a
trigger
which
is
intense,
and
then
the
system
starts
to
figure
out
how
to
generate
this
execution
plan,
which
consists
of
the
concrete
instance
of
each
task
and
also
this
concrete
configuration
in
terms
of
which
input
data
to
put
into
which
task
and
produce
which
type
of
output.
A
And
then
this
exclusion
plan
is
just
also
a
graph.
And
then
the
last
step
is
to
decide
how
to
split
up
in
this
graph
into
different
parts.
So
then
we
can
deploy
them
to
different
age
nodes.
This
will
be
done
by
the
workers,
but
to
figure
out
how
which
task
goes
to
which
age
node.
This
is
done
by
the
master
in
workflow.
B
A question: so you have that execution plan, and it is actually computed in the cloud system; once it gets computed, you have basically created a deployment plan in the cloud system, and after that is created, it gets actually deployed in the edge environment?
A
Right, yeah. This is the current open source version. We know there is a bottleneck in calculating this execution plan and figuring out how to deploy it, because right now it is done in a centralized way. That is why we also have a distributed version that makes the orchestration decisions in a distributed manner; that is a more advanced version that we haven't really released yet.
B
Another question: could something like the execution plan and deployment plan run on something like Kubernetes, so that you could autoscale? If you need more capacity to create that execution plan, you could provision it on demand, right?
A
Right, right, that's correct. FogFlow is kind of one layer on top of Kubernetes, but to deploy this deployment plan we can definitely leverage Kubernetes as well. Right now we just integrate with the Docker engine directly, but for the cloud node we can definitely benefit from Kubernetes, because then FogFlow doesn't have to manage the resources in terms of containers; the orchestration of containers can be managed by Kubernetes.
A
Right. We also have a graphical user interface to do this; right now you have to manually input the data types, so in the next version we will try to introduce a kind of type management system, because in FogFlow the data model is standardized.
A
The scenario is to identify a lost child during a big event, like the Olympics, in an area with many different stadiums. If a child is lost, the parents can ask for help from a staff member, and the staff member can then use the edge computing infrastructure to find the child quickly, because many cameras and edge gateways are deployed at different locations in this area. We constructed a sample application logic for this.
A
This
is
a
service
topology,
but
has
only
one
task
which
is
face:
matching
discuss
just
take
the
image
of
lost
child
provided
by
the
parents
and
look
at
the
video
stream
from
each
camera
and
then
to
match
that
with
the
image
of
the
lord's
child.
If
it's
identified
from
the
video,
then
it
just
reports
yes,
and
this
task
will
be
instantiated.
A
A
different
location
according
to
where
the
camera
is
located,
where
this
edge
loads
have
been,
has
been
deployed
in
this
area
and
to
trigger
that
we
need
to
define
an
intent
object
to
say
this
service
has
the
highest
priority
and
we
like
to
achieve
minimal
latency
for
doing
this,
then
the
system
will
understand
even
occupied
all
the
resources
to
to
do
this
highly
urgent
task.
B
Another question: so this will happen in real time? Basically, you specify the intent, then you schedule that to start doing the face matching, and then it looks through all the edge nodes that have cameras, right, to try to do that face matching?
A
Yes, it's more or less like that, but if you remember from the previous slides, we also have a scope, which is part of the intent. We can define and select a specific area, which is the scope in this intent; then we execute this service for that scope first.
B
And in this case the scope would be just an area on the map? Got it.
A
Okay, so the normal development process to implement a FogFlow service is like this; maybe I can skip that slide, because here it is easier to follow. There are three main elements needed to program a FogFlow service; the first is the operator.
A
The application developer basically uses the graphical interface to construct the graph by linking multiple tasks together; then the design phase is finished. Then comes the operating phase: whenever we define an intent and issue it to the FogFlow system, we trigger FogFlow to deploy the service, in the way you have asked for in the intent object.
A
As the developer of an operator, you don't have to care about the other parts. You only know that you will receive a certain type of input data and then perform your internal data processing logic. If you have some data to report back to the system, you can use the publish callback function to publish the result back. The other two functions are optional: if the input data provided by the FogFlow system is not enough and you need to query or subscribe to some extra information, you can use those two functions.
A
We also provide a simpler, serverless-style programming model called the fog function. A fog function is a special case of a service topology: it is very simple, containing only one task, plus a default intent. The default intent just says: do this for the entire system scope, always with minimal latency.
A
It is also slightly different from traditional serverless computing, because here the triggering is based only on the availability of the data we require, whereas in traditional serverless computing you need to have an event. That event can be an HTTP request which provides the input data, or it can come from some storage system, which is more generic. In the FogFlow case we borrow the idea of serverless computing, but it is a little more specific, more data-oriented: a function is only triggered when the data it requires becomes available in the system.
A
Okay,
so
that
is
and
then
the
second
step
is
to
define
this
service
topology.
As
I
mentioned
before,
we
have
this
graphic
user
interface
to
compose
this
graph
with
multiple
tasks
together
and
then
we
also
provide
this
user
interface
to
define
a
service
in
this
intent,
which
right
now
cover
this
different
perspective.
This
polygon
is
to
specify
a
scope
to
trigger
this
computation
logic
defined
by
the
surface
topology.
A
Also, yes, we got some pull requests from researchers at different universities, because they figured out that FogFlow is easy to extend for different ideas, especially optimizations of the orchestration for data-oriented applications in this cloud-edge environment. Overall, we are promoting FogFlow within the FIWARE community.
A
As I mentioned at the very beginning, here I have a little bit more to say about the FIWARE community. It is mainly driven by European countries for IoT in general, covering different areas, mostly smart city and now also smart industry.
A
It covers features like device management, data management, interfaces with devices, security, and data processing and visualization. FogFlow is one open source component in the data processing chapter, and we promote it together with all the other partners in this community. In terms of research, looking ahead, we are currently extending the orchestration to be distributed, or what we call autonomous.
A
Autonomous orchestration means the orchestration is not just carried out in the clouds in a centralized way, but can also be carried out in a distributed fashion by the local edges, while they remain able to collaborate with the rest of the system. Compared with other existing edge computing or fog computing frameworks, FogFlow is more data-oriented, because it is based on a standardized data model called NGSI, and because of that it has an opportunity to support a kind of data sharing platform.
A
Yes, so today I covered the overall design principles of FogFlow and also introduced some of its key parts, like the intent-based programming model, orchestrating the service based on the defined intent, and some ways to do the optimization according to the service level objective defined in the intent-based programming model. As for the FogFlow community, we try to support it with our best efforts.
Hopefully we can leverage FogFlow to support more applications, both in the 5G area for future applications and for AI applications, in the context of a data sharing platform. Yeah, that's all from my side. If you have questions, we can have more discussion. Thanks.
B
Are there any plans to have some sort of reference architecture around Kubernetes, to deploy some of the workloads at the edge and also in the cloud running on top of Kubernetes? Or is that not in the plan?
A
Especially for the cloud node part, we can just rely on the Kubernetes setup to launch part of the deployment plan. If we say this part of the deployment plan is to be carried out on the cloud node, on the cloud side, then this can be done automatically by using the Kubernetes API. Then we don't have to worry about things like reliability, or restarting the Docker containers when something goes wrong; this is very well supported by Kubernetes.
B
Awesome, yeah. K3s is also a CNCF project. I have another question about the secure data sharing platform: is this going to be a hosted service, a platform, or is this going to be more of a piece of software that people could run to set up their own data sharing platform, maybe in a federated way?
A
Yes, basically it will be a federated environment. For example, in Europe people talk about IDSA, the International Data Spaces Association; there is a big construction effort around this. There is also Gaia-X, which is the cloud computing vision of Europe, because Europe is quite fragmented, so this infrastructure also needs to be federated.
A
So
in
terms
of
this
data
sharing
platform,
we
are
thinking
that
so
different
companies
or
different
countries.
They
will
be
able
to
host
their
part
of
this
data
sharing
platform
and
then
there's
some
kind
of
process
that
to
authenticate
who
can
join
this
kind
of
data
sharing
platform
and
how
they
can
communicate
with
each
other.
So
right
now,
because
there's
no
kind
of
agreement
about
the
common
data
model
between
different
hosted
system.
A
So
it's
just
then
people
decide
to
go
to
this
up
level,
so
this
interprobability
between
different
hostage
platform
can
be
based
on
semantic.
A
It
is
a
much
higher
level
because
then
there's
no
need
to
say
what
specific
data
model
you
need
to
have
as
known
as
this
data,
it
can
be
represented
as
a
meaningful
in
meaningful
way
like
at
the
air
force,
some
kind
of
semantic
related
technology.
That
was
the
result
for
because
it's
hard
to
have
a
common
agreement
on
the
specific
data
model.
A
There is also competition between different companies, so FogFlow can play just one role, because other companies can also provide a data sharing platform, as long as they can get a certificate from this consortium; every company can provide some kind of implementation of this data sharing platform.
A
They have some kind of regulation: basically, the organization should be able to check which type of security or usage control you can provide. When they check your system to give you a certificate, they will check whether you can provide a certain level of privacy guarantees. They have a kind of blueprint, but it is still ongoing, because carrying this out in practice is actually not easy.
C
I just wanted to thank Bin for the awesome presentation and a very interesting use case of a lost child, and I'm happy to see that the project is considering using Kubernetes and k3s as a backend. K3s has exactly the same APIs as Kubernetes, so there shouldn't be any compatibility issues.
A
Yes, I'm very happy to really integrate with Kubernetes, because we see it has generated a big impact in the community, especially for cloud computing, and if we move FogFlow to a production level, then we will have to integrate with Kubernetes.
B
Your question was more about this SIG Runtime, or some of the other interest groups, right?
A
Just
for
this
discussion
and
to
to
know
this
latest
progress,
or
do
you
have
some
specific
targets
for
each
interest
group
to.
B
Yeah,
so
I
I
think
the
the
main
roles
of
the
six
who
are
part
of
the
toc
sort
of
like
an
extension
of
the
tlc
toc
stands
for
technical
oversight.
Committee
and
felina
is
one
of
the
members
of
the
toc,
so
the
toc
makes
decisions
about
projects
that
want
to
join
the
foundation
and
then
also
sets
a
framework
for
these
projects
on
maturity
level.
Whether
this
project
is
entry
level
more
like
sandbox
or
like
incubation
or
more
like
there's
a
the
level.
B
And helps them get more adoption, right. So the SIGs are an extension of the TOC: there is a SIG Runtime, there is a SIG Observability, there is a SIG App Delivery, and so on for the different areas.
B
So
this
is
just
sick
run
time
and
it's
mostly
about
workloads
and
cloud
kubernetes
is
in
the
scope
and
gets
sick
too,
but
there
are
some
other
projects
like
iai
type
of
projects
or
or
containing
runtimes.
B
So
so
the
goal
of
the
effect
is
engaging
the
communities.
So
we
have
these
meetings
every
two
months
or
every
sorry.
Every
month,
ever
twice
so
every
two
weeks
yeah
in
in
engage
the
community,
so
they
get
interested
in
this
against
their
projects.
We
started
using
more
of
its
projects.
A
Yeah, thanks for the introduction. I'm quite new to this community, but to be honest, I think this is a really very cool community; we can see it's growing very fast and moving fast, you know.
B
Yeah, we're happy to have you involved. I know you're part of FIWARE, so you're part of a different community, but this is open source, so anybody is welcome to participate, contribute, and get engaged.
B
All right, so yeah, have a great rest of your day, and I'll see you next time.

A
Okay, yeah, thank you very much. Yeah.