From YouTube: KubeEdge Community Meeting 20200603 (Pacific Time)
A
Okay, the recording has started. Today is June 3rd, Beijing time, and this is the KubeEdge community meeting. Now, let's start.
A
Here is today's agenda. If you have anything, just add it to the agenda and we can talk about it. First, let's go over the 1.3 release. The 1.3 release has already been published; however, some fixes are still going on. Now for the 1.4 release, let's go over the items one by one. "Verify integration with Kata Containers": we don't have an owner here, so does anyone volunteer to do that?
B
This is Calvin from Linaro. I am interested in doing this, verifying the integration with Kata Containers.
A
If you want, you can create an issue on GitHub, so then we can track it.
A
Create an issue so we can have easy tracking. The second one is supporting kubectl logs/exec to the edge. I think John is working on that, right? I don't see him today, so let's skip this. The metrics-server in the cloud, right? How is it going? Yeah, we're still working on it right now.
A
Let's go over the next one. It's still pending. I think this is pending because Kubernetes upstream is itself still upgrading to 1.14; we are just following the Kubernetes upstream. Once the upstream changes, we will follow and upgrade to Golang 1.14. Next, gateway support.
A
Again, the yellow means implementing or pending. I think this one is okay. Then let's go over this one, installation; the documentation is in progress. And Kubernetes 1.18: do we have a plan? Do we change it to implementing? I don't think we want to keep that.
A
Okay, let me take that. For the certificate rotation, did we decide to go forward, or do we just keep it pending? I think.
A
I think that's still pending; let's change the color.
A
The next one is the lint: let's improve the code formatting and everything, and spend more time on this. Okay, and we need an issue for this.
C
Chevy and I will work on this. Basically, this is to reduce the cooldown time period when a node suddenly gets disconnected and tries to reconnect, and this will also be very useful later on when we make the CloudHub able to scale.
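The reconnect cooldown described here is commonly handled with capped exponential backoff, so a briefly disconnected node retries quickly while a long outage does not hammer the CloudHub. A minimal sketch of that idea, with hypothetical parameter values (not KubeEdge's actual EdgeHub logic):

```python
import random


def backoff_delays(base=1.0, cap=30.0, attempts=5, jitter=False):
    """Compute reconnect delays that double each attempt, up to a cap.

    With jitter enabled, each delay is drawn uniformly from [0, delay],
    which spreads out reconnect storms when many nodes drop at once.
    """
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays


# Without jitter the delays double each attempt until the cap:
# backoff_delays() -> [1.0, 2.0, 4.0, 8.0, 16.0]
```

Jitter matters for the scaling point raised above: if thousands of edge nodes reconnect on the same deterministic schedule, the hub sees synchronized load spikes.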
A
One more: message queue on the roadmap. Hey Jan, do you want to talk about that? Let me open the GitHub issue.
D
Yeah, so this item popped up when we had an internal discussion on the ML offloading framework and what features we would need from the KubeEdge side. Message queue support between the cloud and the edge nodes, and also among the edge nodes, seems to be the one we really need right now. From our discussion, it seems there is some communication mechanism, but mostly at the network layer.
D
So you can get the IP of the target service from the service registry, and you can basically set up your own message queue and start to use it, but we're wondering whether.
D
No, the focus itself is the application. Basically, it's a message queue from the KubeEdge side opened up to the application. There are options to do this. At least one way is we just integrate an independent message queue; we pick one, maybe a lightweight one, I don't know, RabbitMQ and those things, or a heavyweight one. But then this becomes the application's responsibility, right? It's quite a hassle.
D
So that's why we hope that the platform can integrate and test a lightweight message queue for us to consume, and then from the application side we can use that message queue for any type of communication. It can be the management communication within the application, or it can be the data message communication within the application.
D
So Ian mentioned that there is a roadmap item for this. We're just wondering whether some support for this can be prioritized, because this seems to be very basic. Without it, we have to do all this work in the application design, and that is really not what we have planned for.
A
So yeah, we have the Istio-based service mesh, and Istio plus Kafka could be one of the solutions. I don't think we want to have a message queue only; I think we should implement it together with the Istio service mesh, or one may not use Istio, but deploy it together with the service mesh to enhance EdgeMesh. Also, this could apply not only to edge-to-edge application communication but also to cloud-edge application communication. I think that's my take.
D
My thought is from the application side. Of course, having the service mesh included is ideal, but right now our focus is on getting a message queue interface on the backend side. Whether that's combined with the service mesh, or we just do a simple message queue integration, I think either way works for us.
D
It's up to the community to decide how we should proceed. I guess my point is that, from the application side, we don't necessarily need a full-blown, full-featured message queue, but we want some simplified message queue service there, so we don't have to mess with integrating a message service ourselves.
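For illustration, the kind of simplified, topic-based interface being asked for here might look like the following minimal in-process sketch. The class and topic names are hypothetical, not an actual KubeEdge API; a real platform-provided queue would route messages across cloud and edge nodes rather than within one process:

```python
from collections import defaultdict


class SimpleQueue:
    """Minimal topic-based publish/subscribe queue (illustrative only)."""

    def __init__(self):
        # Map each topic name to the list of subscriber callbacks.
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callback invoked for every message on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver `message` to every subscriber of `topic`."""
        for handler in self._subscribers[topic]:
            handler(message)


# The application publishes to a topic without caring how the message
# crosses the cloud/edge boundary; that would be the platform's job.
received = []
q = SimpleQueue()
q.subscribe("ml/offload", received.append)
q.publish("ml/offload", {"task": "inference", "node": "edge-1"})
```

The point of the sketch is the narrow surface: an application that only depends on `subscribe`/`publish` is not tied to Kafka, RabbitMQ, or any other specific broker the platform might pick underneath.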
C
Well, I think this is for application communication, right? Actually, integrating with a message queue is easy, but what we need to do is make it able to communicate across edges, and between the cloud and the edge.
C
That's why currently we are prioritizing the service mesh, to make it able to work over different network environments, like cross-subnet or inside a subnet.
A
Yeah, I think that's another issue, right? It's not hard to deploy a message queue.
C
If there's no special requirement for the underlying network environment, for example, if all the edge nodes are inside the same subnet, it's easy to deploy a third-party message queue and make it work.
C
So currently, one thread under discussion is that we want to find a way to make the communication, I mean the service discovery and the service communication, work across different subnets.
C
I think that's fundamental. Then it really depends on how the application wants to communicate with the other instances.
D
Yeah, let me give an example. Let's say, from the application point of view, all I care about is that I want to pass a message over, right? I don't want to write the message communication at the HTTP or WebSocket level, which means handling buffering, retransmits, all those things. So let's say I want to use something.
D
A third party, I'll just pick an example: say I want to use Kafka. Moving forward, my application will be tied to Kafka, but I really don't know whether Kafka is a good fit for the KubeEdge environment. If it's in the cloud, maybe I have less concern, because in the cloud a lot of message queues have been verified and proven to work. But in the KubeEdge environment, which one should I pick?
D
Is it good for KubeEdge? I don't want to go through all this hassle and then figure out, okay, this one is not good for the KubeEdge environment. That's the help I was hoping KubeEdge would provide, or at least that the community can consider making this a prioritized work item. And then the whole service mesh, the underlying network support, cross-node.
D
That is not exactly the same thing as the message queue layer. The message layer, you can consider it an overlay on top of the network. I think all the work the community is trying to do in the service mesh and network layer is very essential; that's very, very important.
D
The message queue support I'm talking about is something overlaid on top of that. If we integrate a third-party message queue, this may not be a lot of development work, but rather verification and analysis type of work. It's almost like when you guys developed Beehive, right? Why Beehive? Why didn't you pick up something existing?
D
It's the same thing. For a message service, if you plan to offer one, then that work is what I'm referring to. I guess the point is that this is, in my mind, different from the service mesh and network support, although those are equally important, or even way more important, than the message queue. The community can decide. There could also be another possibility, that the community may even decide to say: we don't want to mess with that; it's up to the application.
D
You decide whichever one you want to use, and you package it within your application; that's another possible decision. I'm looking for a kind of roadmap, a decision thought process, on this message queue stuff, because I think for the IoT support you guys did a great job supporting a lot of messaging.
D
Those things are relevant to the IoT space, but now, when we move to something non-IoT, either MEC or the ML offloading AI framework, a more generic message queue would be, at least from our requirements, one of the several most important things we would need.
D
Hence today's question. I'm not looking for an answer today; it's just a voice from the application side about what we are looking for from KubeEdge.
C
I think it's very helpful to the community that you provide this information. Currently, from the infrastructure layer, we actually don't want to bind the decision to any particular implementation; for example, we want to provide the best support to as many CNI plugin implementations as possible.
A
Maybe we can put something into the Akraino project as an example. I think using an Akraino blueprint is very valuable to show a successful example of this combination, where we integrate all kinds of open-source technologies together to build a really working and useful example.
A
Yeah, the next topic is about the Arm DevSummit. For that one we are focusing on KubeEdge, Istio, and Envoy. Tina, do you want to give a quick intro? I don't think everybody knows about it.
E
Yeah. In the first week of October there will be an Arm DevSummit. This is a developer summit for all the developers to come and talk about cloud-native things; we have a big track that is cloud native.
E
We are going to submit a workshop proposal. There are three types: workshop, session presentation, or panel. The workshop we propose covers KubeEdge, Cilium, Envoy, and Istio, with a title like "cloud-native edge computing and service mesh networking", etc. That's why we want to have a somewhat broader abstract.
E
The workshop is to give developers a hands-on experience. Because it is a dev summit, we have to mention things like how it can run on both arm64 and x86, right? We want to make a very appealing abstract so it can be accepted, and we can put these four relevant projects all together.
E
So the developers and attendees have one place to get a very smooth experience of cloud-native development.
A
Thank you, Tina. Yeah, Kevin, did you see Tina's proposal? I think there's a good chance we can evangelize the KubeEdge project, so let's come up with some ideas. Yeah, yeah.
C
Yeah, I'm thinking about it. I don't have the answer right now, but I want to highlight that Arm is a primary architecture we want to support, especially for the edge components, so it's definitely a very good opportunity to evangelize KubeEdge and also tie KubeEdge and Arm together more closely.
A
Okay, I think we can. We already have the group; let's discuss it further there. So Tina, do we have a deadline for this abstract, your workshop?
E
Yes, June 9th. It has to be submitted by June 9th.
A
Okay, that will be next Monday, but I think we should finish this week, so yeah.
C
I don't have much, just a very quick thing: for the China Summer 2020 program, currently all the projects and the tasks are finalized, so it's now open for students to apply for the tasks.
A
Yeah, I think we have it here, right? This one, right?
C
Yeah. For the students, I would also like to clarify that either Chinese students or students attending universities in China are eligible to apply.
A
Oh, one more thing, Kevin: for the virtual KubeCon China, the virtual booth, what's the format? Because we don't have enough information to build one.
C
I'm also personally contacting the foundation to get more details about the virtual booth. I also gave some suggestions that we try to replicate the idea and style of a real-world booth and exhibition. Probably the foundation will finally decide if they want to do it that way, so that every sponsor would get a fixed-size booth depending on their sponsorship level.
C
Actually, I think the tricky thing is the virtual exhibition hall. Currently they don't have any idea on this yet, but they actually have a preview page about the virtual booth, I mean the things inside the virtual booth, so they're trying to provide a kind of standard setup.
C
That's still under discussion, not yet decided, but yeah, we will think about that.