From YouTube: CNCF SIG Runtime 2020-12-17
B
Yeah, good morning, how are you? Good? I don't think we'll have a lot of people joining today, because it's kind of close to the holidays, but in any case the presentation is being recorded, so it will be uploaded. But let's just give it maybe a few minutes to see some more people join, and then we'll get started.
B
Yeah, but we get audience from several places, so we do have a bunch in the Bay Area too, but also people from other...
B
Yeah, so, you know, the SIG notion, I think, started with Kubernetes, with all the different scopes of Kubernetes: SIG Network, SIG API Machinery, and so on. And then, because the CNCF started to grow... I mean, the CNCF started with Kubernetes, right, but then it started to grow, and different projects started to join the foundation, or started to be donated to the foundation, and it was all cloud native, but not related to just Kubernetes.
B
So then the CNCF has, or had, this structure with the TOC, where they looked at these projects, reviewed them, and decided whether they would be a good fit for the foundation, and then with these stages: sandbox, incubation, and graduation. But I think what happened is that so many more projects started to get interested in joining the foundation that the TOC became kind of overwhelmed with all these different requests, and the meetings were only, I think, twice a month, so there wasn't enough room to provide time for the community to present and to engage the projects. So they started these SIGs, with the different scopes: networking, runtime, CI/CD, and so on. And this is where we are: we help out the TOC in engaging the community with the different projects, so they can present and give status on some of the projects. Some of these projects are not even part of the foundation, so they may join the foundation, or they may be part of some other foundation, but we just want to engage the community.
B
And I think there's a ticket open from, I think, Liz; she's a TOC chair, and she opened a ticket on renaming some of the SIGs. It hasn't happened yet, and it may happen next year, because there's confusion between the naming: this is called a SIG, and the Kubernetes ones are also called SIGs, right? So we might change the name, I don't know, sometime next year. So yeah, sure, okay, we can get started. We've got Quentin; happy holidays.
B
Thanks for joining. Yeah, we can get started. Okay, let me try to do this.
A
Yeah, you see the slides? Yeah, sure. Before we start, actually, sorry, I joined a couple of minutes late; I just wanted to check, since we have very few people here. It's basically the chairs and the TOC representative.
A
Is this a general problem, or is this because it's late in December? Or do we need to do anything about that?
B
Yeah, I think one of the reasons is that it's late in December, but we could use more audience even during November; I think it was generally like eight or so people.
A
Oh okay. Yeah, I can quite imagine; I struggled personally to get in here. You know, the password is not in the invite, and you have to kind of dig into the links and the notes to then find the password. So I wonder if there isn't a bit of friction there as well; that's not helping. Maybe we need to do a little bit of outreach and just tidy everything up, so it's easy for people to find us and join the meeting.
B
Yeah, that sounds like something we can do more on next year, to try to get more people, right. So yeah, sounds great, Quentin. Maybe we can actually have a meeting or something and kind of brainstorm on that, you know, offline. Yeah, thanks; good, that makes sense.
C
Yeah, sure. Good morning, everyone. Today I'm going to leverage this meeting to present OpenYurt, which just joined the CNCF as a sandbox project. Today I'm going to briefly describe what OpenYurt is about, why we developed it, and what its main functionality is.
C
What problems it solves, and what may be the next directions of this project. So, okay, this is the agenda of my today's talk. I'm going to first describe the background of the problem a little bit, because I guess many people already know about edge computing, Kubernetes, et cetera, and in particular I'll describe some challenges of using Kubernetes to handle edge use cases.
C
Well, in cloud computing, people put data and workloads into the cloud, run a service in the cloud, and get a result. Edge computing works in kind of the opposite way, in the sense that the actual computing happens on the edge side, regardless of whether it's the far edge, near edge, or just edge, and there is a centralized place to collect the data and do the orchestration.
C
That happens in the cloud, maybe; that's the kind of common way people do it, or it happens in an on-prem system. But anyway, the idea is that in general the workloads run on the edge, and the workloads run kind of distributed algorithms. You can think of MapReduce, big data, even AI algorithms: the algorithm takes the local data as input, then sends results back to the center, and the entire workload is most likely going to be managed by the center.
C
There are a few characteristics that are driving this model into production, and people tend to use it because this model has a few advantages. It has low latency in terms of the response time, because the network transmission delay is kind of reduced. It has lower bandwidth requirements on the public networks, because much of the data analysis and computation happens on the edge side.
C
There are some autonomy requirements, in the sense that even if the cloud-edge network connection is down, the application can still run at the edge and provide service, which is pretty important in some use cases, like content delivery. And last but not least, it has a better security and privacy model, because sensitive data does not have to be sent to the cloud, which can be a concern for many people.
C
Going beyond that, there is a kind of new model called cloud-edge computing. Basically, it's a cloud platform that unites the cloud, the edges, and IoT devices. This is a compelling direction from the provider's perspective.
C
Basically, there is a cloud, which runs the centralized control plane, and on the edge side they have compute nodes that are closer to the devices and the data. And there is a device layer, which points to all the, you know, remote devices that people can connect to in remote environments. Those devices are typically the data generators, and they provide the data consumed by the applications running on the edge.
C
So this is a pretty interesting architecture, or framework, for edge computing nowadays. Briefly, about Kubernetes: it is a container orchestration platform. The nice thing about Kubernetes is that it has a nice abstraction of the whole infrastructure layer, with well-defined APIs.
C
They
can
provide
a
unified
user
experience,
regardless
which
environment
they
use,
even
in
the
private
account
or
public
cloud.
It
is
a
very
popular,
especially
very,
very,
very
powerful
point
of
the
kubernetes
and
after
a
few
years
of
the
grow,
I
think
has
a
pretty
strong
ecosystem.
C
It
has
a
lot
of
you
know:
controllers
of
plugins
or
solutions,
so
it
would
resolve
all
kinds
of
use
cases,
applications,
and
I
think
one
of
the
most
important
benefit
of
the
quinet
is
is,
it
is
very
highly
extensible
and
the
use
of
crd
makes
you
mention
all
the
the
broader
use
of
crd
make
the
people
make.
C
The
air
system
is
very
easy
to
be
integrated
into
the
other
system
or
provide
or
extend
its
ability
to
feed
the
business
logic,
so
that
that's
my
reason,
that's
a
reason
why
people
start
to
develop
all
kinds
of
solution
based
on
kubernetes
to
resolve,
including
that
competing
problem.
C
I think Kubernetes itself kind of fits the edge computing model, because it is still a layered design: it has a centralized control plane and the nodes, and it is a distributed system. So all the design mechanisms have to consider the case that it is running as a distributed system, with the reliability that implies.
C
It needs to resolve all kinds of reliability issues. The entire controller logic, the reconcile logic, can tolerate transient failures, because it has retry logic, and the list-watch mechanism, which is used in most controllers, and many plugins are designed to handle failures. And especially in the edge case, there are more failure scenarios and more problems in terms of reliability and availability, because there is an obvious difference compared with cloud computing: the cloud-to-edge networking is not very reliable.
So, if you look at the Kubernetes layers, they kind of fit the edge computing layers; the only difference is that the nodes can now be present, or managed, on the edge side.
C
Instead of the cloud side. And the applications running on the edge nodes can still, you know, manage and manipulate the devices that are spread everywhere. So that's the reason we think Kubernetes can naturally fit the edge architecture, from a very high level at least. But indeed we will have some challenges if you really want to use Kubernetes in edge use cases. Now I'm going to describe some of the challenges that we have found.
C
And some of the points that we are going to address in the OpenYurt project. So, okay, here are a few challenges in terms of how to leverage Kubernetes to manage edge applications. The first one is the unreliable network. It is kind of common in the edge case that the cloud-edge networking is not very reliable, and sometimes this network problem is not a hardware problem; sometimes it can be, you know, an administrator's decision, because there are use cases where the local admin just tries to avoid public cloud networking, and they just cut off the network activity for whatever reason, like to save the bill, save the cost. So, in a word, you should expect that sometimes there is no cloud-edge network connection, and sometimes this is not just a transient problem; it can last for quite a long time.
C
There is no cloud-to-edge-node connection. This is problematic for the Kubernetes architecture, because the kubelet running on a node is stateless; it relies on talking to the API server to get all the object states once it restarts. Then we have the problem that, if the node and the API server are not connected and the node restarts, all the previously running pods cannot be recreated by the kubelet, because the kubelet simply cannot get the states of the pods. So that's one problem.
C
Another thing is on the cloud side. We also have a problem if there is no cloud-edge network connection, because the API server relies on the kubelet to send heartbeats to indicate the health of the node. There is a node controller in Kubernetes, and if the API server doesn't receive the heartbeats for some time, say a few minutes, the node controller will think the node is offline, and from the apps' perspective the node controller will start to evict the pods.
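The heartbeat and eviction behavior described here can be sketched roughly as follows. This is a minimal illustration, not Kubernetes source code; the function name is invented, and the timeouts are stand-ins for the node controller's grace period and eviction timeout settings.

```python
# Sketch of the node-controller behavior described above: if heartbeats
# stop arriving for longer than a grace period, the node is marked
# NotReady, and after a further eviction timeout its pods are evicted.
# The numbers are illustrative, not authoritative.
GRACE_PERIOD = 40.0       # seconds of silence before NotReady
EVICTION_TIMEOUT = 300.0  # further seconds NotReady before eviction

def node_state(last_heartbeat, now):
    """Return (condition, evict_pods) given the last heartbeat time."""
    silence = now - last_heartbeat
    if silence <= GRACE_PERIOD:
        return "Ready", False
    # The controller cannot tell a dead node from a healthy node that
    # merely lost its cloud-edge link, which is exactly the edge problem.
    return "NotReady", silence > GRACE_PERIOD + EVICTION_TIMEOUT
```

On an edge node, a long but harmless network cut therefore looks identical to a node failure, which is why the pods end up evicted on the cloud side.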
C
Now, this is another problem: people on the cloud side will find the pods getting deleted if there is no cloud-edge networking. Another problem is kind of unique to the edge side. The network connection may be okay, but maybe it's a unidirectional network. Basically, it is okay to use the SNAT technique to allow the edge node to access the public cloud API server, but the reverse direction is not allowed, for all kinds of reasons, because the edge may sit inside a private network. So then the problem is that the cloud side cannot access the node, and a lot of people rely on the kubelet APIs to retrieve logs and use exec to do debugging.
C
That kind of functionality is broken if there is no network connection from the cloud to the edge. So that's another problem that we are going to face in the edge scenario. The third one is that there is a need for pool-wise application management. What I mean here is, you can think of it like a region, because it is kind of common in edge use cases that there are a bunch of nodes spread across different regions.
C
The region can be, you know, a factory, or it can be some independent IDC, so the environment would be much more complicated compared to the centralized cloud provider cases. Now, in those cases, applications running in different regions may have different network setups, different network configurations, and it would be nice, and it is kind of required, that we have the ability to do intra-region management, such that the same application's replicas spread across different regions, and if there are services that want to talk to each other, they should try to avoid making cross-region network calls.
C
So this is another kind of requirement. In OpenYurt we are trying to address these three problems. But indeed there are also other problems which are currently not handled by OpenYurt, and which point to our future work. The obvious one is the resource requirement.
C
This was a big problem a few years ago. I'm guessing you guys are pretty familiar with KubeEdge, and I think one of the main advantages of KubeEdge is that it tries to resolve the resource requirement problem, because a few years ago the edge nodes running the entire Kubernetes node software stack had limited resources in terms of CPU and memory. And indeed the Kubernetes node components, such as the kubelet, kube-proxy, and even the CRI, consume not a huge but a non-negligible amount of resources, which can be a problem for running Kubernetes at the edge. But in OpenYurt we are not going to handle this problem at this moment.
C
In the next couple of slides, you will see that in OpenYurt we are trying to leverage all the existing Kubernetes components, so we kind of have to rely on the Kubernetes community to do, you know, the cost reduction on those node components. And the last one is device management. At this moment, I don't think we have anything special on the Kubernetes side for device management.
C
There is no kind of standard model or abstraction; people just create their own CRDs to describe their devices. This part is a pretty open area. OpenYurt didn't handle it, because we don't have support on the node side for the various IoT protocols, like the MQTT protocol. We don't do that, but it is indeed one of our future directions to support it, though we will probably do it some other way:
C
instead of creating everything from scratch on our own, or within the OpenYurt community, maybe we can leverage some existing framework, infrastructure, or open source system that is primarily designed for managing IoT devices. So we are thinking about doing that kind of integration in the future. All right, so next I'm going to describe the high-level design of OpenYurt.
C
So the basic idea, the primary design principle for OpenYurt, is that we are trying to extend Kubernetes without any intrusive modification to the core components. With this primary goal, we can then kind of achieve the second goal: we are trying to have a solution that is fully compatible with upstream Kubernetes from the API perspective, while indeed trying to solve some problems, like supporting some very typical edge characteristics such as autonomy, the cloud-edge communication problem, and some pool-wise management. We also want to support heterogeneous compute on the edge side, and we want our solution to be, you know, Kubernetes native, in the sense that we are trying to resolve everything by implementing add-ons and plugins. And the entire OpenYurt is going to keep up with the upstream releases: once upstream releases a new version, we will release a follow-up release within a couple of weeks. That's our plan.
C
Hopefully. And we try to achieve 100% API compatibility, in the sense that if you have an operator, an application, whatever YAML, that works in, you know, a cloud-based cluster, it should work in an OpenYurt cluster without, I think, any modification from the user's perspective. That's our goal.
C
All right, so these figures briefly describe the high-level OpenYurt architecture. It has two parts: cloud-side components, and edge-side components running on the nodes. The cloud side is pretty typical; it is just a bunch of Kubernetes controllers and CRDs, plus a tunnel server running in the cloud. So that is pretty much the cloud side: a bunch of controllers and add-ons. And on the edge side there are some components, of which the most important one is YurtHub. It is a proxy that proxies the traffic flow from the node side to the API server, while the reverse direction is handled by a kind of tunnel service. If you look at this figure, we have a tunnel server and a tunnel agent; this tunnel system was invented to handle the traffic flow from the cloud side to the edge side.
C
For all the existing Kubernetes components, like, you know, the kubelet, kube-proxy, or Flannel and other existing add-ons, there is no change required on their side. The only change they probably need is to point to YurtHub in their start parameters, instead of pointing to the API server directly. But that's all a configuration matter; there is no code change required.
A
Can I interrupt you just for one second? I have a question, and unfortunately I have to drop off the call shortly to go to another meeting. First of all, thank you for a great presentation; this looks like a really interesting project. I couldn't help but notice there are a lot of similarities with KubeEdge, of course, and I was just wondering: I understand, you know, Huawei started the KubeEdge project, and you guys compete almost directly with Huawei in many spaces. Other than that kind of concern, are there any reasons why you decided to build a separate system rather than become a contributor to KubeEdge? Does this solve some problems that KubeEdge does not plan to solve?
C
Okay, I can answer that question. So, for KubeEdge, I don't know if you guys are familiar with it, but the biggest problem that I see is that it changed the core kubelet. They wanted to resolve the resource requirement problem a few years ago, so their decision was to largely rewrite the Kubernetes node side, get rid of a lot of things, and even change the basic communication model used in Kubernetes, like list-watch: they use a WebSocket-based communication channel.
C
They created their own WebSocket channel to communicate between the edge side and the cloud side. Now, the problem with changing Kubernetes is API compatibility. So that's a problem we faced: some of our customers were hesitant to use KubeEdge because they worried about API compatibility, because KubeEdge rewrote a copy of many of the Kubernetes APIs.
C
They created EdgeHub to bring back many of the kubelet's functionality, so you can see the problem there. So, overall, I think the biggest difference, from a high level, is that I believe they are going to have their own community, because there are a lot of solutions tied to their current architecture and the way they make things work. That is okay, because there are a lot of contributors there.
C
There are a lot of users there, but they will probably have their own community. On our side, we are trying to leverage the Kubernetes community. So that's the reason we built this system. If you look at the way we do it, API compatibility is always our primary concern; we are trying to make sure that, once the system is built, the applications that run in the Kubernetes community can run in this system.
C
So that's pretty much our goal. And, as you can see, because KubeEdge has gone that way, it is difficult for them to convert to a plugin-based solution. So that's the reason we are trying to think about how we can solve the problem without, you know, changing the core components.
A
Okay, thank you, that answers my question very well. Thank you very much.
B
I have a follow-up question on that. What are some of the implications, for this project, of not modifying the kubelet code, or of the changes that KubeEdge is doing? So, does that mean, performance-wise, you might have some penalty?
C
Okay, there are a few things I can make clear. First, for KubeEdge, I have to admit that it is much more complicated than our system, because it provides more functionality. They provide kind of the edge device management, which we didn't touch at this moment, which simplifies our problem. If you look at their entire architecture, I would say that about one third of the components are there to handle the devices. So, performance-wise:
C
I don't think it matters much; it's not in the data plane, it's all control plane kinds of things, so the performance is not that critical from my perspective. And if you look at the way we deal with it, yeah, there are some losses, because you are doing the proxying, but, as I said, it's the control plane.
C
You probably have a little bit of delay on getting the logs of a pod, maybe not noticeable at all. That is the kind of thing you can think of; other than that, I don't see a performance issue there. The resource consumption, that indeed can be a problem, because for KubeEdge:
C
they claim they put everything in one binary, and the binary is around fifty megabytes. But from my perspective, as long as you use Docker, the memory will be blown up anyway, and Docker is not controlled by you.
C
So our solution may consume more resources on the edge node, that is for sure. But one trend that we found when we talked to customers: yes, the resource was a limitation a few years ago, like I said, but nowadays they typically have moderately powerful machines on the edge side, so running these kinds of components is kind of normal. So the resource is not a very hard constraint for our primary use cases.
B
Got it. Okay, thanks.
D
Sorry, this is Diana. I have a follow-up question to this whole thing, just curious: when you're comparing KubeEdge to OpenYurt, is one of them designed more for being disconnected from the network for longer periods of time, or is that similar between the two?
C
I think in that aspect they are similar, because, and I'll elaborate in a couple of slides, in general both will cache the cloud-side states locally, which means that in both systems, if you don't have cloud-edge communication, the edge side can still run.
D
Okay, and then how long can you be disconnected? I was involved in something like this recently, where there was a problem with edge systems that had very poor internet connections, and I was just wondering how long these use cases want that disconnected state to go on. Is a day reasonable, or is that not reasonable?
C
A day, I think, is okay. But, as I said, sometimes the network connection problem is not a hardware problem; it's intentional. We see some cases where companies' network administrators just cut off the line, because they say: I don't want any public cloud; I only allow a public connection for a few minutes when I need it, otherwise I just cut off the line. So I think the system will work; the whole question is just how long.
D
Okay, yeah, the network seems to be a bigger problem than the resources. Like you said, a lot of these edge devices have incredible resources now, more than what we would normally have in a server in a data center, I'm finding. So I agree with you on that, but the connectivity still seems to be a problem for people.
C
Yeah. So in IoT they have a few models, which I didn't cover in these slides. There is one model, direct connect, where the IoT device connects directly to the cloud, which is not a model that we are targeting, because we are more like, you know, the edge application model: there are somewhat powerful machines running on the edge side, and the IoT devices connect to those machines. So that's the reason we don't have to worry about that problem.
C
I think, well, I don't have a direct measurement, but I would say roughly one or two cores and two gigs of memory. This is my estimation, and I believe that if you really want to run a reasonable workload (only one pod, I don't think that's a reasonable workload; think hundreds of pods), then that kind of resource should be required.
E
And you have also mentioned that Docker is the container runtime. Is it like 100% of the container runtime, or do people use containerd or CRI-O?
B
One more question here. Maybe you'll go over it on some of the other slides, but some of these edge devices, also because of what Diana mentioned about network connectivity, need to store a lot of data; sometimes it's video, right, or lots of information.
B
So I guess the storage will depend on what the workloads are, and on whether they want to use some physical volume configured with Kubernetes, right?
C
Again, we are not going to cache the workload data; we are caching the metadata, because we only need to make sure that, if the cloud-to-edge network is down, the Kubernetes components can work. We cannot guarantee that the edge applications still work, so the application has to be designed in a way that it has its own handling of the network to the centralized place being offline; but in general they are distributed, they run distributed algorithms.
C
Okay, next I'm going to describe YurtHub. This figure looks a little bit complicated, but I'll try to go through it. The goal of YurtHub is to preserve application availability when the cloud-edge networking is off. In particular, what we're trying to resolve is that, even if the cloud-edge network is off and the node is restarted, the kubelet still can figure out which pods it needs to start.
C
So the idea is, we have YurtHub proxy all the traffic from the Kubernetes node components to the API server. It will check whether the network is online or offline. If it's online, it will send the request directly to the API server, and once the request is satisfied, it will save the result to a cache manager.
C
Basically, if you have good networking, what YurtHub keeps doing is updating the objects' status from the API server into the local cache, by monitoring the HTTP request responses. If YurtHub detects that it has lost the connection to the API server, it will, you know, serve the HTTP requests from the kubelet or other node components locally, and get the results directly from the cache.
C
It saves all the listed objects into the local cache and keeps updating them every time there is a new list request. So next time, if the network is down and the kubelet sends a list request again, it will just check the local storage and return all the pods it has stored as the list result back to the kubelet. So this is how it works. Basically, the idea is that we cache the cloud metadata locally, through YurtHub, onto the local disk.
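The dual-mode behavior just described can be sketched like this. It is a toy illustration of the idea, not OpenYurt's actual code; the class and method names are invented.

```python
# Sketch of the YurtHub idea described above: forward node-component
# requests to the API server while the link is healthy, mirror every
# response into a local cache, and answer from that cache when the
# cloud-edge link is down. All names here are illustrative.

class YurtHubSketch:
    def __init__(self, apiserver):
        self.apiserver = apiserver  # callable: request -> response
        self.cache = {}             # last-known response per request
        self.online = True

    def handle(self, request):
        if self.online:
            try:
                response = self.apiserver(request)
            except ConnectionError:
                self.online = False  # lost the cloud-edge link
            else:
                self.cache[request] = response  # refresh the cache
                return response
        # Offline: serve the last cached result, if there is one.
        if request in self.cache:
            return self.cache[request]
        raise LookupError("never saw a result for %r" % (request,))
```

With something like this in front of the kubelet, a restart during a network cut still returns the cached "list pods" answer, so the node knows which pods to bring back up.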
C
So currently there is a slight difference here: we use a kind of simplified mechanism. We don't use a database here; we just save the objects as plain files on the local disk. There are a few reasons. First is the simplicity, for sure. Second is that the amount of metadata is not big; think about how many pods are going to run on a node, maybe just the metadata for hundreds of pods maximally, so less than a meg of storage is kind of enough to save that data. But indeed, again, we lose some functionality in terms of the benefits of a database, like reliability kinds of things, but yeah, that's what we chose for now.
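A plain-file cache of the kind described might look roughly like this. The on-disk layout is not specified in the talk, so the resource/namespace/name path scheme below is an assumption made for illustration.

```python
import json
import os

# Sketch of a plain-file metadata cache as described above: each cached
# object gets its own file, keyed by resource/namespace/name. The path
# scheme here is invented for illustration.

class FileCache:
    def __init__(self, root):
        self.root = root

    def _path(self, resource, namespace, name):
        return os.path.join(self.root, resource, namespace, name + ".json")

    def save(self, resource, namespace, name, obj):
        path = self._path(resource, namespace, name)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            json.dump(obj, f)

    def load(self, resource, namespace, name):
        with open(self._path(resource, namespace, name)) as f:
            return json.load(f)
```

The metadata for even hundreds of pods is tiny, which is the point made above: flat files are enough, at the cost of a database's reliability features.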
C
We just save it as files on the local node, for all the objects that we cache from the cloud side. Then we had another problem: when we recreate the pods, we are required to recover the IP and MAC. We do see some requirements saying, oh, if you recreate a pod, please keep the same IP and MAC, for whatever reasons, and we were trying to achieve that.
C
But that requires support from the CNI plugins to really make it work. In YurtHub we have some solutions to work with the CNI plugin to make sure it can keep the IP and MAC once the pod is recreated.
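The idea of remembering a pod's addresses and handing them back to the CNI plugin on recreation can be sketched as below. Everything here is hypothetical: the class, the `requestedIP`/`requestedMAC` keys, and the callback names are stand-ins, not a real CNI or OpenYurt interface.

```python
# Illustrative sketch only: record the IP and MAC assigned to each pod, and
# return them as "requested" addresses when the same pod comes back after a
# restart, so a cooperating CNI plugin could reuse them.
class AddressKeeper:
    def __init__(self):
        self.records = {}   # pod name -> (ip, mac)

    def on_pod_started(self, pod, ip, mac):
        self.records[pod] = (ip, mac)

    def cni_args_for(self, pod):
        # if we saw this pod before, ask the plugin to reuse its addresses
        if pod in self.records:
            ip, mac = self.records[pod]
            return {"requestedIP": ip, "requestedMAC": mac}
        return {}
```

As the speaker notes, whether the addresses are actually honored is up to the CNI plugin; the record-keeping side is the easy half.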
C
So this is pretty much the high-level view of how YurtHub handles the, you know, node autonomy problem. It basically relies on the cache and the dual proxy modes. If you look at this figure: regardless of how long the network between your cloud and the edge is off, if it just stays off, everything is served from the local cache, and that cached data simply never gets refreshed. So that's kind of a passive mode in which we solve the problem. All right, so the next part is the tunnel part.
C
There are a lot of, you know, tunneling solutions out there, but many of them follow the same pattern. In general, the agent on the remote side creates, you know, a long-lived TCP connection to the server, and then every request goes through that tunnel, bypassing all the, you know, firewalls sitting between the agent and the server.
C
So instead of, you know, doing our own tunnel server implementation, we decided to leverage the Kubernetes apiserver-network-proxy, which, I think, they started to support from version 1.18.
C
The API is called the egress selector, which means that with that configured, all, you know, network traffic going out of the API server can be redirected to the apiserver-network-proxy, which, in short, is ANP.
C
We leverage the ANP implementation to implement the tunneling, but there are a few things to deal with, because ANP requires a gRPC protocol. I think after 1.18 the API server can send requests over gRPC, but before that it just sends plain HTTP requests. To support the older versions, we implemented what we call the interceptor.
C
So you can see it as just a very simple translator kind of thing: it just encapsulates the original HTTP requests so they become gRPC requests, and that's all there is to it. The ANP server and ANP agent are pretty much the upstream components, but we do have some additions and changes there. In upstream ANP, the server will choose a random ANP agent to
C
establish the tunnel. With multiple ANP agents connected to one ANP server, the server will randomly pick one agent to do the network forwarding. But on our side, we have to do very explicit routing, because if you want to access node A's kubelet, the traffic has to go through node A's ANP agent.
C
So in our project, we added a routing policy into ANP to make sure it will find the right connection, so that the cloud side can communicate with the kubelet running on the edge node. And we are trying to upstream that part of our solution into the ANP community as well, because this is not a requirement of ours alone; it is also requested by others.
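The explicit routing policy just described can be reduced to a small sketch. This is a simplification for illustration (a dictionary of callables standing in for real tunnel connections), not the actual ANP change: the point is that the server must pick node A's agent for node A's kubelet, never a random one.

```python
# Sketch of per-node tunnel routing: one registered agent per node, and
# requests addressed to a node are sent only through that node's agent.
# Agent connections are modeled as callables for illustration.
class TunnelServer:
    def __init__(self):
        self.agents = {}    # node name -> agent connection

    def register(self, node, agent):
        self.agents[node] = agent

    def route(self, node, request):
        if node not in self.agents:
            raise LookupError(f"no tunnel agent registered for node {node}")
        return self.agents[node](request)
```

Upstream ANP's random pick works when any agent can reach the backend; here each edge node is reachable only via its own agent, hence the explicit table.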
B
One question: the interceptor is required to handle the old API versions, right? Is that because some people, you know, have to keep running different versions of Kubernetes, or what?
C
So in that case, the request coming out of the API server is a plain HTTP request, while the ANP server requires its input to be in the gRPC format. So we just use the interceptor. You can treat it as another proxy: it just does a protocol conversion from HTTP to gRPC, and that's it.
B
Got it, got it.
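The "just a protocol conversion" role of the interceptor can be illustrated in miniature. The envelope below is invented for the sketch (the real interceptor speaks ANP's actual gRPC protocol, not this framing): a plain HTTP request is wrapped into a length-prefixed frame, loosely mimicking how a gRPC message carries an opaque payload.

```python
# Illustrative only: wrap a plain HTTP request into a length-prefixed frame,
# the way the interceptor wraps HTTP from older API servers into gRPC for
# the ANP server. The frame format here is made up for the sketch.
def intercept(method, path, body=b""):
    raw = f"{method} {path} HTTP/1.1\r\n\r\n".encode() + body
    return len(raw).to_bytes(4, "big") + raw

def unwrap(frame):
    size = int.from_bytes(frame[:4], "big")
    return frame[4:4 + size]
```

The receiving side unwraps the frame and gets the original HTTP request back unchanged, which is all the interceptor needs to do.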
C
Okay, so yeah, okay, the next one is application management. The goal is: we are trying to have a model for a pool of nodes, so that we can do pool-based application deployment and management.
C
This part pretty much leverages the existing Kubernetes way of extending Kubernetes, using CRDs. In general, we introduced a NodePool CRD to manage a pool of nodes, and we introduced a new workload called UnitedDeployment to manage workloads across different pools.
C
So, which means, if people define, say, okay, we have a pool A and a pool B, and we register pool A and pool B in the UnitedDeployment custom resource, this controller will create and manage two Kubernetes Deployments: one targeting pool A's nodes and one targeting pool B's nodes. You can specify different replica numbers for pool A and pool B, and you can upgrade them in a united way or in different ways.
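The fan-out the controller performs can be sketched like this. The field names below are simplified stand-ins, not the real CRD schema, and the node-pool label key is an assumption; the shape of the behavior is what matters: one custom resource with a list of pools expands into one Deployment per pool.

```python
# Sketch of the UnitedDeployment expansion: one resource listing pools
# becomes one Deployment per pool, each pinned to that pool's nodes and
# carrying its own replica count. Field names and the label key are
# simplified assumptions for illustration.
def expand_united_deployment(name, image, pools):
    deployments = []
    for pool, replicas in pools.items():
        deployments.append({
            "name": f"{name}-{pool}",
            "replicas": replicas,
            "nodeSelector": {"apps.openyurt.io/nodepool": pool},
            "image": image,
        })
    return deployments
```

Upgrading "in a united way" then means editing the one parent resource; upgrading pools independently means overriding a single pool's entry.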
C
It's up to the customers. And we also leverage Kubernetes topology-aware service routing to bound the east-west traffic. In the past, the service model in Kubernetes had no topology: basically, if you have a ClusterIP type of service, the service routing rules get updated in the iptables of all nodes, with no notion of topology. I think in Kubernetes, after 1.18, there is a new feature called topology-aware service routing, and that kind of aligns with our requirement.
C
Then, when the service endpoints try to talk to each other, they are bound to the same pool of nodes, which is a very, very good feature that fits our use cases.
B
C
Exactly. So you could perhaps have two factories; they are not in the same place, but they belong to the same company, so you can think about it that way. Got it. Okay, all right. So next I'll bring up some use cases. I won't give exact examples of the use cases, but pretty much, currently, you can think of OpenYurt as an edge PaaS core.
C
It becomes the foundation that supports quite a few other PaaS platforms built on top of the resources on the edge side. For example, we have a use case that supports an IoT PaaS, which is a platform that specifically manages IoT devices. Now, this IoT PaaS itself has quite a few components: some need to be distributed to the edge side, and some need to be distributed on the cloud side.
C
So the whole deployment model is converted to containers, using Kubernetes and using OpenYurt to manage all the edge nodes, and using OpenYurt to manage the components that belong to the parts that need to run at the remote site. The same thing happens in the CDN, content delivery, case: there are also quite a few services that need to run on the remote side. And the last one is
C
that we also have use cases around AI platforms. A lot of, you know, AI applications, and the tools those applications need, are encapsulated into Kubernetes workloads and deployed on the edge nodes. So, from a high-level view, OpenYurt kind of serves as an edge PaaS core that serves other edge platforms.
C
All right, lastly I'll briefly talk about the community. OpenYurt is kind of a new project; it is only about six months old, so it has a small community at this moment. We are still trying to use a kind of normal or standard way of driving the project forward: we have a quarterly release cadence, and we set up a roughly six-month roadmap.
C
Currently we have a bi-weekly meeting, and nowadays every meeting has roughly 20 attendees. And thanks to the CNCF: after it accepted the donation of, you know, OpenYurt to the CNCF, we got more chances to collaborate with other, you know, companies. Now we have interactions with Intel, VMware and a few other groups, and we are trying to
C
B
Thank you. Thank you for presenting, a very good presentation. And so, one more question, one more question about adoption. Is there anybody using the project already? I mean, you mentioned people using it, or the use cases, for AI and CDN.
B
So, are any of these people who are contributing also using it, or are there other people using it that are not necessarily contributing?
C
Yeah, so a few things. Basically, internally in Alibaba Cloud there is a product, a cloud product, which is entirely built on the open source version of OpenYurt; they are pretty much the same. So you can say that all the customers of that product are directly using OpenYurt at this moment. That is the philosophy that we have: we are trying to, you know, make sure everything is consistent, so that we can apply the same
C
B
Right, got it. And is there anything that the community is doing to get more adoption, like reaching out to some places who are having these challenges? And, sure.
C
Sure. So, there's a reason, if you look at the collaborators, it's kind of natural: Intel and VMware are coming to us saying they have similar use cases, pretty much in their production lines. Both Intel and VMware have their own, you know, solutions to handle edge computing. It's not a new problem, it's a long-standing problem, and they have built their own solutions already. But the problem is they cannot
C
find good collaboration with the Kubernetes community, because their solutions are pretty much standalone, and they were not built for Kubernetes, for sure. Now they are trying to find, you know, a way to integrate their solutions with a solution in Kubernetes. That's why they reached out to OpenYurt. The primary reason they chose OpenYurt, and not KubeEdge, which they had tried, is the compatibility: they like our design of not breaking the API compatibility.
B
Yeah, well, thank you for presenting. Looking forward to the project growing and getting more adoption now that it's already in the CNCF. So.
C
I think, yeah, we probably will leverage the power of the CNCF in a, you know, nicer way, because this is a good platform for letting people know about the project, and we are trying to. Yeah, so one of the problems is that, if you look at our meeting, it's kind of Asia-Pacific-time friendly. So right now it's not very convenient, because nowadays most of the customers, and even these collaborators, are coming from China.
C
But I do, I'll leave my, you know, contact information if you need it. We have a Slack channel, so if people couldn't attend a meeting, they can contact me and I can point them to all the meeting recordings, or they can ask me questions directly.
B
Yeah, sounds good, yeah, yeah. You can send it to me, I'm on Slack, on the CNCF Slack, so you can send that information and I can post it on the meeting notes here, basically to add more, yeah, details on how people can contact you, and yeah.
C
Yeah, just on Slack: OpenYurt has its own Slack channel. So if people just visit the GitHub, they can get it.
B
Yeah, but you do have your own Slack, I mean, but yeah, do you, or do you not, or do you just use the CNCF one? I have.
C
I have, but yeah, I would say the same thing: if you send a message in that Slack channel, I'm going to see it.