From YouTube: IETF113-CAN-20220322-0900
Description
CAN meeting session at IETF113
2022/03/22 0900
https://datatracker.ietf.org/meeting/113/proceedings/
A
Very good. Okay, so this is the Computing Aware Networking BoF. Linda and I are the co-chairs, and we'll get started right now.
A
First note: please note that you agree to follow the IETF processes and policies, including the IPR policies, and also please be courteous to your colleagues here. I'll skip the details.
A
Some meeting tips: if you are here in person, please join the onsite session of Meetecho. If you are remote, make sure your audio and video are off unless you are presenting.
A
I'll skip the resources slide. So this BoF, what it is about: Computing Aware Networking aims at computing and networking resource optimization by steering traffic to appropriate computing resources, considering not only the routing metric but also the computing resource metric and service affinity.
A
So the purpose of this BoF is to increase the awareness of and participation in Computing Aware Networking, so that efforts can be organized coherently at this stage of research-to-early-engineering transition. We will have a few words from our AD, John, later.
A
During this BoF, we want you to keep these questions in mind, so that when we come to the discussion phase at the end of the meeting you will have some thoughts on them. For example: are the use cases important? Are the existing solutions insufficient?
A
So for the agenda, we'll start with the problem statement, use cases and requirements, and then we're talking about two potential solutions: one is load-balancer based, the other one is the Dynamic Anycast architecture. We'll then come to the open discussion for 15 minutes, and then finally we return to those questions, with the chairs' ideas to facilitate the discussion there.
A
I'm sorry, is it hard to hear? Oh okay, I'll speak louder and closer. Okay, so I think for now we'll just turn to our AD, John Scudder.
D
Thanks, Jeffrey. Good day, everybody, thanks for being here. Yeah, I don't have a ton to say beyond what Jeffrey already said to introduce the BoF. So thanks to Jeffrey and Linda, first of all, for agreeing to chair, and I think they've done a great job of pulling the BoF together. Yeah, so when we were approving the BoF, it seemed like the history and discussion was largely around...
D
...well, actually, basically the questions that Jeffrey already projected, which is: is this new work? What group should it land in? And is there energy to tackle it? So that's what I hope the participants will stick with. In particular, in the IETF we have a love of immediately diving into debating about the details of solutions, and I hope we won't do that today.
D
Although debating about solutions is, of course, very much fun. So yeah, please do keep in your minds, as we're going through today, the questions that Jeffrey just projected, and I hope we're going to come away from this with some idea of what our next steps are.
E
I just want to add a few words. For people who speak or answer questions: please check the notes, make sure your questions and answers are captured correctly, and also your names.
A
Okay, thank you. So now we will get started. The first presentation is updates on the related industry and standards organizations.
D
So sorry, Jeffrey, let me interrupt for just one moment with a question for the chairs. I see there's a hand raised in the queue, and maybe you can say a few words about how you want to manage the queue, and Q&A during talks, and so on.
E
So I hope that the questions during the presentations are only for clarification; for example, you can ask, "hey, I heard you said this: what do you mean?" For the debate, let's wait until the open discussion about alternative solutions and different ways of achieving things.
F
Yes, my name is Georgios Karagiannis from Huawei. I'd like to mention that we had some side meetings that are somehow related to the content of this BoF. We had, I think, one during IETF 110 and another one at IETF 109. The name of the side meeting was Dynamic Anycast, so Dyncast, and during the last side meeting...
F
...we also had three questions, such as whether the use cases that were provided there were useful. Some 40 out of 60 people that were in the side meeting said yes. Then there was also a question on whether the content of Dyncast was related to the routing area; there, I think, it was some 47 percent of the people in the side meeting.
G
The initial idea of MEC, as you saw in the title when MEC was conceived, was the idea of exploiting computing capabilities distributed across the network, basically thinking of mobile access, right? This is why the initial acronym of MEC stood for Mobile Edge Computing. But over time, as the work was being done in ETSI, the purpose became somehow more generic, so the ETSI scope changed to Multi-access Edge Computing, in the sense of also considering fixed accesses, fiber access and so on.
G
So what was the original motivation of ETSI MEC? The idea was to put into value some purposes that could be achievable through having computing capabilities in proximity in the networks. First, getting contextual information, so information that could be valuable for applications and services to exploit, let's say, to provide better mechanisms for delivering services.
G
Contextual information such as where the user is located, what kind of access technology is being used, even which kind of terminal, and so on, in order to improve the delivery of the service. Also, to take advantage of proximity: with proximity we can enhance several parameters, several service level objectives such as latency or throughput. And finally, having a number of facilities able to host applications at the edge.
G
So these were more or less the main motivations of the work that we started in ETSI. Over time, ETSI MEC has also been working on integrating this solution with other solutions, like those of 3GPP, the Broadband Forum and so on, trying to get a really consistent approach to deploying these edge computing facilities across the network. What you can see there is the number of participants in ETSI MEC, and also all the specs are available, so you can check them in more detail.
G
So next, please. Well, the idea of MEC somehow changes the story a little bit on how applications could be developed. The idea is to transition from the original client-server approach to something that involves an additional party, right? So we have the client, the edge part and the remote part at the time of delivering or creating the applications.
G
This could have some implications in terms of how to handle the traffic, because there could be different procedures, different processes for steering the traffic across the different parties in the network. So the key issues to understand about the influence MEC has on the routing are somehow listed in the slides: how the edge is discovered by the users; how the users connect to the different edges, so what determines anchoring one user in one edge or another, or directing one user to another edge.
G
What could be the capabilities of the different edges in terms of the service to be provided to the user; how to guarantee QoS; and what happens when the user is moving. This immediately leads to the idea of mobility, but we can also consider seasonal periods where maybe a fixed user is moving between different locations, and it could be of interest to take the contents along with him or her as he or she is located in different places.
G
Next, please. So, typically in ETSI MEC the developments take periods of three years. In the case of ETSI MEC we are now in the third period. In the second and the third there have been interesting developments that somehow highlighted the integration with the 3GPP network, so how to make MEC coexist with what is being developed in 3GPP. We will comment a little bit later on this. And now, in the third period...
G
...ETSI MEC is working on some other capabilities, such as the deployment of MEC facilities in enterprise environments. This is linked with the idea of the vertical industries, so how to leverage those facilities that could be in different administrative domains, the private one and the public one.
G
Also, it's not highlighted here, but MEC is also working on federation, so how to integrate resources from different providers, and this could also have implications in terms of routing the traffic, because we are moving from a single-domain environment to a multi-administrative-domain environment, which could have some implications on that. So next, please. Here in this picture you can see how MEC is being proposed to be integrated with a 3GPP network, with the 3GPP core, the 5G core.
G
Essentially, the idea would be initially to deploy MEC capabilities after the UPF. From the point of view of how to handle the traffic, there could be different ways of doing that. One way could be just to make all the traffic pass through the MEC environment, for whatever processing for the applications and so on, but there could also be another.
G
I mean, we could either pass the information for processing and then send the processed information to the next hop, to the next environment, as we saw in the picture for the applications; but another way of handling it could be to implement local breakout for whatever application could be local.
G
So somehow splitting the traffic and doing some processing on the traffic for improving the performance, and essentially to provide the user with the service he or she is demanding. Next, please.
G
So this is an example of that. MEC in this case would behave as an application function within the 5G core architecture, and according to certain characteristics that could be identified for the end user, for the user equipment, the terminal in 3GPP terminology, MEC could drive some actions on the traffic. So it could divert part of the traffic locally, implementing this local breakout, or leave the other traffic to pass to the next data network exit.
G
Let's say that would be the one that is on the right. So at the end, the point here is that through MEC we will have the possibility of influencing the way in which the traffic is steered across the network, so that there is room, or there could be room, for optimizing things from the routing perspective in this respect. And next, please. We'll comment a little bit now about another similar initiative which has been developed at ITU-T.
G
So this project, well, it's an ongoing project, has been developed in SG13, the Study Group 13 within ITU-T, and the main purpose, the motivation of this project, was more thinking of advanced services, with the idea of the transition to beyond-5G and so on.
G
Also considering fast routing or re-routing processes in order to reach the different environments, the different services and applications being run on these environments. There were also protocol programmability and flexible addressing, for dynamically deploying and instantiating services on top of these capabilities.
G
So CNC is considering three work items at this stage: one about the requirements, another one about a QoS framework, and another one about management and orchestration. The scenarios or use cases that they are considering for illustrating or developing the capabilities are low latency with high computing requirements, service consistency, and huge scientific data applications. These are the use cases from which they have elaborated the needs that are expected to be documented in the ITU-T recommendations later on. Next, please.
G
So the work is yet in progress, it's ongoing, so there is not too much, let's say, material available in the documents at least so far.
G
They are proposing this kind of architecture that is somehow inspired by the 3GPP architecture, which is known as a service-based architecture. Essentially, the different components are connected together through a kind of service bus, and on this service bus there are different capabilities for network control, for policy, and so on. So essentially, this is the point where they are: considering, as we mentioned before, low latency, high computing, high mobility, dynamic services that could be deployed in different parts of the network, and energy services as well.
G
So next, please. Well, this is the final one. This was just simply to summarize, I mean, two initiatives that are being developed in formal SDOs. Just to briefly comment, there are some other initiatives: for instance, in the Linux Foundation there are some initiatives for these compute capabilities to be deployed across the network, and also in the GSMA.
G
There is another one, which is named the Operator Platform, but at the end, what it proposes is to federate different environments across different providers, even connecting to hyperscalers like Amazon Web Services, Google Cloud, Microsoft Azure and so on; so essentially to create a substrate of compute environments. And also, well, the objective, the purpose of the work that we are trying to move here, would be to optimize the way of reaching all these compute capabilities spread across the network. And with that, I think that I'm done.
H
Erik Nordmark. So in this MEC and CNC work, are they viewing it purely as a routing function, or are they looking at, like, how do we enable end-to-end security, like TLS, in this context? Or is it purely routing?
G
Well, regarding routing, they are considering capabilities of influencing the steering of the traffic, so the possibility of inserting rules for this local breakout and so on. From the point of view of security, I don't have an answer for you right now, so maybe I can comment offline on the mailing list.
F
My name is Georgios Karagiannis; I need to put my name in the queue, you know why. Just a question, Luis. You mentioned that the work that has been done in MEC and CNC might have an influence on what will be done in the IETF, for example for this BoF. What kind of requests do you expect to be brought to the IETF from MEC and CNC?
G
Well, I think that the purpose would be to optimize the way we could reach the applications and services that are running on the different compute environments; so to have ways of identifying or selecting the proper edge in such a way that the delivery is as efficient as possible. Behind efficiency we could have better experience, higher QoS, higher throughput, and so on; essentially to optimize the delivery of the services.
A
A quick reminder: please state your name when you come to the mic. I will have two more questions, from John and AC.
D
Okay, I'm actually holding a place in line for Jim Yutaro, who put a question in the chat. His question is: please provide an example of a huge scientific application.
G
Okay, it would be, for instance, the transfer of datasets that could be distributed across the network; so the interchange of datasets between scientific research centers and so on. These data could have very high volumes, and there would be ways of optimizing that transfer, essentially.
G
So CNC is being developed in ITU-T, and MEC is developed in ETSI.
B
If you look at this work, what you're presenting here: behind the UPF, are you trying to optimize where the resources are coming from that the application is going to use at that edge location, or are you trying to influence the 3GPP mechanism to make a better selection, to select that UPF location? So that's my question regarding bringing new routing work to the IETF: at which location, or at which part of the architecture, are you trying to do that additional, let's say, logic?
G
The idea would be that MEC identifies the specific user and then instructs the UPF to somehow implement, for instance, local breakout. There would be other ways of integrating it, not only this one, but in this, let's say, pure MEC architecture, there could be influence on the routing, for sure. But as I see it, at least, this would be, let's say, out of the scope of the IETF.
E
Yes, please note the MEC and CNC work by ETSI and ITU-T is not the entirety of the work; it is not done by the IETF. The later presenters will show what work they expect to bring to the IETF.
L
Okay, this is Liu Peng from China Mobile, and I'd like to introduce the CAN use cases here. Next, please.
L
So CAN aims at computing and network resource optimization by steering traffic to appropriate computing resources, considering not only routing metrics but also computing resource metrics and service affinity, which were shown in the chairs' slides, and we see it as a problem domain for IETF routing. The similarity to the ITU-T work is that both of them consider computing and network resource joint optimization.
L
But CNC focuses on the vision, scenarios, requirements, architecture and network function enhancement for the future mobile core network and the telecom fixed-mobile-satellite convergent network, not on the internet or routing area that CAN is concerned with. So I think this answers the question from before. Next slide, please.
L
So, at this stage, there are many CDN nodes with servers installed, which can be appropriately upgraded to edge computing infrastructure, while more diverse computing resources need to be provided compared to the CDN's original content cache resources. Moreover, more edge computing nodes will be set up in an on-demand manner. Here's a typical analysis result with predicted numbers, which shows that we may need to build very many edge nodes at the core aggregation network, the access aggregation network and on-site, and not only for China Mobile.
L
So why does the infrastructure develop so fast? It is because of user demand. Here are two points. First, users want the best user experience, such as low latency and high reliability, given the emerging new services such as high-definition video, VR, live broadcast and so on. Another is that users want a stable service experience as they move into different areas. As we know, the mobile network has developed well, providing more convenience for users to use lightweight clients to get more application services.
L
However, the new infrastructure by itself might not be enough to fully guarantee quality of service; more needs to be done. We summarize three points. The first one is to provide functional equivalency by deploying instances of the same service across edge sites for better availability, and then to keep load balanced for both static and dynamic scenarios, by both application-layer and network-layer solutions.
L
It can be seen as traffic being delivered to the optimal edge site according to multiple factors, for example edge computing load, and the "best" should be defined for each service; sometimes it might be the appropriate one rather than the real best one. Okay, next slide, please.
L
So, to keep these thoughts in mind: the fact is that edge computing has the advantage of closeness, but sometimes the closest is not the best, which is the point of this use case. The problem's root causes are that the closest site may not have enough resources, because the load may dynamically change, and that the closest site may not have the related resources, given the heterogeneous hardware in different sites.
L
And here we use three charts to describe why the closest is not the best in some cases. This is a common way to deploy edge computing: there will be an edge data center for the metro area, which has high computing resources and will provide service to more users at working time, because more office buildings will be in the metro area; and there will also be some remote edge sites, which have limited computing resources and provide service to the users close to them.
L
So we can see that in this slide the load will move from the metro area to the remote side, on the right side, because on weekends or during the nights it's not working time.
L
Users will move to the remote area that is close to their house for some weekend events, so there will be more service requests at the remote sites, but with limited computing resources, while the rich computing resources in the metro area might be used less. So it needs to steer some traffic to the metro center, because for only one or two days you can't justify adding servers to the remote site.
L
Take Cloud VR as an example. First is the sensor sampling delay, which can be taken as 1.5 milliseconds, and second is the display refresh delay, which can be taken as 7.9 milliseconds; both of them are in the client and not controlled by the network or the cloud server. The frame rendering computing delay with a GPU could be optimized to 5.5 milliseconds, so the network delay budget can be calculated as 5.1 milliseconds.
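Making the budget arithmetic explicit (the 20 ms end-to-end total is implied by the figures quoted in the talk, since the four components sum to the 20 ms that edge site 3 meets below):

```latex
t_{\mathrm{sensor}} + t_{\mathrm{display}} + t_{\mathrm{render}} + t_{\mathrm{net}} \le 20\,\mathrm{ms}
\;\Rightarrow\;
t_{\mathrm{net}} \le 20 - (1.5 + 7.9 + 5.5) = 5.1\,\mathrm{ms}
```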
L
So we can get the first point: the budgets for computing delay and network delay are almost equivalent. And there is a picture on the right side that shows that the client request could go to edge site 1, 2 or 3 to get service, with the corresponding network delay and computing delay shown here.
L
So we can see that if we choose the edge site according to the computing load only, it will be edge site 1, and the total delay will be 22.4 milliseconds; and if we choose the edge site according to the network only, it will be edge site 2, and the total delay will be 23.4 milliseconds. But if we consider both, it will choose edge site 3, and the total will be less than 20 milliseconds. The total delay here includes the delay in the client, which as shown is 9.4 milliseconds. So we can get the second point: the total delay requirement can't be met if one finds the "best" choice according to either the network or the computing resource status alone.
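A minimal sketch of the selection logic in this example. The per-site network/compute split below is illustrative (only the 9.4 ms client delay and the three totals are given in the talk), but it reproduces the quoted outcomes:

```python
# Hypothetical per-site delays in ms; only the totals 22.4 / 23.4 / <20 come from the talk.
CLIENT_DELAY = 9.4  # sensor sampling + display refresh (1.5 + 7.9 ms)

sites = {
    1: {"net": 9.0, "compute": 4.0},   # lightly loaded, but far away
    2: {"net": 4.0, "compute": 10.0},  # nearby, but compute-congested
    3: {"net": 5.0, "compute": 5.0},   # balanced
}

def total(site: int) -> float:
    d = sites[site]
    return CLIENT_DELAY + d["net"] + d["compute"]

by_compute = min(sites, key=lambda s: sites[s]["compute"])                     # site 1
by_network = min(sites, key=lambda s: sites[s]["net"])                         # site 2
joint      = min(sites, key=lambda s: sites[s]["net"] + sites[s]["compute"])   # site 3

for name, s in [("compute-only", by_compute), ("network-only", by_network), ("joint", joint)]:
    print(f"{name}: site {s}, total {total(s):.1f} ms")
# compute-only: site 1, total 22.4 ms
# network-only: site 2, total 23.4 ms
# joint:        site 3, total 19.4 ms  (< 20 ms budget)
```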
L
The final conclusion of these two points is that it is required to dynamically steer traffic to the appropriate edge to meet the end-to-end delay requirement, considering both the network and the computing resource status. What's more, computing resources have big differences across different edge sites, and the closest one may be good for latency but lack GPU support, and should therefore not be chosen, which gives more factors to consider on the computing side. Next, please.
L
And not only for Cloud VR: more applications, such as intelligent transport, which includes connected cars and video recognition at intersections, also have the requirement of joint optimization, because both of them require low latency, high bandwidth and high computing resources.
A
We have three more slides; maybe I'll just show the slides and people can read off the screen.
L
Yeah, yes, so I lost my network; I don't know where I was. So these slides are just to show that there are more applications that require the joint optimization of computing and network resources, and that mobility will bring a higher challenge for them.
L
Okay, so I can't see the screen.
L
The consideration before this slide, yeah, yes. Here, the second consideration is that those apps requiring both low latency and high or specific computing resources have almost equivalent budgets for computing and network delay, and the load of the network and the edge sites may change dynamically and rapidly. So when steering traffic, the real-time network and computing resource status should be considered at the same time, in a fast, effective way.
L
Here we summarize two preliminary conclusions of this part: the traffic needs to be steered among different edge sites, and when steering traffic, the real-time network and computing resource status should be considered at the same time in an effective way. Okay, that's all. Thank you.
L
Okay, a good question. Computing resource includes the load, the chip category and the cache. It is different from the network metrics, which can be measured by bandwidth and delay. So it is more complex to model, measure and represent the computing resources, which vary in a highly dynamic way at this stage. So maybe the computing resource information model is a new work item, yeah.
B
Wim Henderickx, Nokia. I have a question, so I understand the context, right. So when we say we want to steer the traffic to the proper edge, what do we mean: do we say we have to select a UPF based on that application, or do we go beyond that UPF?
B
So it's related to my first question that I asked during the session, and I also believe that you probably want to do the steering per user, or per endpoint, let's say, based on the application, right? So it's not just anchoring to the UPF, but you actually want to do it with application awareness, so that you say, okay, AR goes here, something else goes on this side because it has better resources. Is that my proper understanding of the context that you're trying to address here?
L
The application-based solution may be not as quick as a routing-based or other underlay solution, and I think those details could be discussed in the solutions part, yeah.
B
The second way to solve it: I believe 3GPP has a lot of mechanisms to even do per-UE and per-application selection already today, to say, okay, you can select a dedicated UPF for this application and UE; so that's already in place. So if that's true, then I think this problem will have to be solved, or is being solved, in the 3GPP space and not in the IETF.
B
And then, third, you could say, okay, you have to do something beyond the 3GPP level and actually do it in the routing system; but then, if the context is really 3GPP or 5G and stuff like that, we'll have to have that interworking with 3GPP somehow. So that's the reason I'm trying to understand a little bit the context of what, at what level, and with which parameters we want to do this.
O
I think that my question is quite specific. I wanted to ask if computing resource is measurable, and thus it's its own service level objective, or if it's rather unmeasurable, and then it's a service level expectation. Thank you.
L
Yes, as with the previous question: the computing resource measurement cannot really be done directly in either way. We may consider two ways: one is to use an index to represent its level, for the load, and the other is to bring more information for more specific usage. That will bring specific new work and may also be discussed in the solutions part.
L
So yes, it could be measured either way, I think; it's just a choice to measure it in different ways, yeah.
L
Yes, it's also me introducing this part. Based on the use cases discussed before, we can get the preliminary conclusion of joint optimization of network and computing resources. So here is the gap analysis and the requirements. Next, please.
L
We analyzed it from three aspects: one is the early binding, then the health check, and the load balancing over DNS. For the early binding, the client first resolves the IP and then steers traffic, and this always uses the DNS entry cached at the client; the information is old and may not be real-time. And often the resolver and load balancer are separate entities, which incurs even more signaling overhead, by needing to first resolve and then redirect to the load balancer for the final decision.
L
DNS was originally intended for control, but not for data-plane speed; in other words, it may be a little slow for this usage. And the health check is on a frequent basis, so there is overhead; but the limited computing resources at the edge will change rapidly, while a more frequent health check is prohibitive in cost, and the load balancing over DNS really focuses only on the edge server.
L
Second is the load balancer. We will show two common deployments of the load balancer. One is a single load balancer in a single site for all server instances: it can be found in edge site 2, and there's one load balancer that works on behalf of edge sites 1, 2 and 3.
L
So the advantage is that it is easy to deploy, and the cons include that it has a single point of failure at the load balancer, and the network path from the load balancer to a server instance at another site might not be optimal; for example, as the red dotted path shows, there's an additional path from edge site 2 to edge site 1. And another way is that each site has its own load balancer. The pro is easy deployment, but there's no load balancing among multiple sites, because each one works only for itself, so it can't meet the use case and the other requirements. Okay. Next, please.
L
Here's the summary of the gap analysis. The first one is the dynamicity of relations: the existing solutions exhibit limitations in providing dynamic instance affinity
L
in packet transmission. Under complexity and accuracy, the existing solutions require careful planning for the placement of the necessary resolution functions in relation to the resulting data-plane traffic, which is difficult and may lead to inaccuracy of the scheduling. And next is metric exposure and use: existing solutions lack the necessary information to make the right decision on the selection of the suitable service instance, due to limited semantics or due to information not being exposed.
L
Finally, security: it can be listed as the existing solutions may expose the control as well as the data plane to the possibility of DDoS attacks on the resolution system as well as on service instances. Okay. Next, please.
L
So, considering the use cases and the gap analysis, here are the main goals to meet these use cases. One is open choice: being able to access the locations of multiple computing resources; then dynamically collecting the appropriate computing resource status; and then using multiple metrics to select the edge. Next slide.
L
So here are the potential requirements; the formatting here is wrong, but it doesn't matter. The first one is to support steering to the available edge sites dynamically, based on anycast or other potential solutions. The second is to support considering and using both network and computing metrics; as discussed before, the computing semantic model information may be a potential work item for it, and the rate of the control signaling of the metrics should be considered, so as not to bring too much pressure on the network. And the third one is to support...
L
...effective computing resource representation, perhaps by using a single index, for example a CPU threshold, or multiple pieces of information for specific purposes. The fourth one is to support the interface between network and computing components, maybe by centralized or decentralized advertising and signaling. The next is to support session continuity and service continuity, to realize the functional equivalency in different areas. And the last one is to maybe support the management of network and computing resources.
A
Okay, we have two minutes for quick clarifying questions. For open discussion we have 30 minutes arranged at the end.
P
Hi, I am not sure if there are any other potential existing solutions that need to be analyzed, so I hope to include more analysis, such as SFC; I think we should be more persuasive for this work. Thanks.
L
Okay, thank you. As I know, at this stage the load balancer doesn't really know all the network metrics. It is a possible way; most details of the solution could be discussed in the following part. Thank you.
A
Okay, we'll move on to the next topic, the two potential solutions, but then we'll come back to open discussion.
M
Okay, so this presentation is about a load-balancer-based solution for the requirements discussed for Computing Aware Networking.
M
So, the definition of a load balancer, as we see from a Google search: it says it's a traffic cop sitting in front of your servers. So basically it takes the client requests and then distributes them to the available servers that are serving that particular application, based on the workload on the servers. It also keeps track of the health of the servers, and makes sure it sends the requests to the ones which are online. Next slide, please.
M
Load balancers support session persistence, which means a certain client that is allocated a certain server to use that application will continue to use that same server; this is called session persistence in load balancer terminology. Another use case for session persistence is when there is some sort of client-based information caching in use; there, too, using the same server allows using the cache related to that client, which means better performance. Next slide.
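A minimal sketch of the session-persistence idea just described; the structure and names here are illustrative, not from the talk:

```python
import random

servers = ["s1", "s2", "s3"]      # healthy backends
sessions = {}                     # client_id -> pinned server

def pick_server(client_id: str) -> str:
    # Reuse the pinned server if the client already has a session;
    # otherwise choose one (here: randomly) and pin it.
    if client_id not in sessions:
        sessions[client_id] = random.choice(servers)
    return sessions[client_id]

assert pick_server("alice") == pick_server("alice")  # persistence holds
```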
M
Yeah, so modern load balancers also support service discovery, health checking and, you know, intelligent load balancing algorithms. Service discovery is figuring out where the services are: their location, their names and addresses. They're capable of finding out the load on them and then routing the traffic to the ones which are less loaded; and the load balancing algorithms are also capable of considering some network information, such as keeping traffic within zones, and so on.
M
So, the types of load balancers: layer-4 and layer-7 load balancers. Layer-4 load balancers look mostly at TCP and UDP headers and load balance based on that information. These L4 load balancers are highly scalable and perform better, but there may be some efficiency problems due to application-layer invisibility, and L7 load balancers solve that problem.
M
They look into the application layer and then provide the load balancing. They're the most efficient, but they may be slower, because they have to observe the packets up to the L7 application-level information.
M
So there is also a hybrid model that many solutions provide, with a mix of L4 and L7 load balancers: there's a first level of L4 load balancer, which load balances based on TCP/UDP headers, and then there is an L7 load balancer that will further load balance based on application-layer information. So this is also an existing solution that's available with load balancers. Next slide, please.
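A toy sketch of the hybrid split described above, with made-up pool names and a made-up HTTP path field: the L4 tier only hashes the transport 5-tuple, while the L7 tier inspects application content:

```python
from dataclasses import dataclass

@dataclass
class Request:
    five_tuple: tuple   # (src_ip, dst_ip, src_port, dst_port, proto)
    http_path: str      # visible only after L7 inspection

L4_POOL = ["l7-a", "l7-b"]                               # second-tier L7 balancers
L7_BACKENDS = {"/video": ["v1", "v2"], "/api": ["a1"]}   # path-prefix routing

def l4_dispatch(req: Request) -> str:
    # L4: no application visibility, so hash the transport 5-tuple.
    return L4_POOL[hash(req.five_tuple) % len(L4_POOL)]

def l7_dispatch(req: Request) -> str:
    # L7: route on application-layer content (here, the URL path prefix).
    for prefix, backends in L7_BACKENDS.items():
        if req.http_path.startswith(prefix):
            return backends[hash(req.five_tuple) % len(backends)]
    raise LookupError("no backend for path")

req = Request(("10.0.0.1", "192.0.2.1", 40000, 443, 6), "/video/42")
print(l4_dispatch(req), "->", l7_dispatch(req))
```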
M
So we all know that the network layer can provide paths with strict SLA guarantees, right; it's responsible for delivering strict SLAs. So if you request the network to provide paths from point A to point B which are bandwidth-guaranteed or latency-bounded, for example a path within five milliseconds of delay that should guarantee one gig of bandwidth...
M
...it can do so either through distributed traffic engineering or centralized controller-based traffic engineering; these solutions are available in the network layer and can provide paths from one point to another with strict SLA guarantees. Additional guarantees would also be served with network slicing solutions, such as reserved queuing and packet prioritization based on the packet types. Next slide, please.
M
Yeah, so this slide is showing, from point A to point B, paths satisfying strict SLAs, and the network controller is capable of providing these strictly guaranteed paths from point A to point B. And then, if a path is not available, for example here from the top node to DC3, there is no path that satisfies five milliseconds of delay and one gig of bandwidth; so there is no path there.
M
So, knowing what the load balancer can do and what the network controllers can do, here is a sort of analysis of whether the load balancer can be used for the requirements posed by CAN. The load balancers and the servers are at the overlay.
M
They are not really in the routing layer; they are in the application layer, and they translate the server address, an anycast address, to the individual site address. When the load balancers do that, they use the information of the compute resource availability, the load on the compute resources; and when they do that, they can also consider the network information: whether the required SLA path, with the required delay measurement and the required bandwidth constraints, is met, and a path to the particular site is available. Exchanging this information can ensure that the right site is chosen as the location for serving the application. Next slide, please.
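A minimal sketch of the combination just described, with made-up numbers: filter sites to those for which the network layer reports an SLA-feasible path, then pick the least compute-loaded of the remainder:

```python
# Hypothetical inputs: compute load per site, and SLA feasibility
# (e.g., "a path with <= 5 ms delay and >= 1 Gbps exists") per site.
compute_load = {"dc1": 0.80, "dc2": 0.30, "dc3": 0.10}
sla_path_ok  = {"dc1": True, "dc2": True, "dc3": False}  # no feasible path to dc3

def choose_site() -> str:
    candidates = [s for s, ok in sla_path_ok.items() if ok]
    if not candidates:
        raise RuntimeError("no site reachable within the SLA")
    # Among SLA-feasible sites, prefer the least loaded compute.
    return min(candidates, key=lambda s: compute_load[s])

print(choose_site())  # dc2: dc3 is less loaded, but has no SLA-compliant path
```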
M
So this is the load balancer architecture, sort of where the future of the load balancing architecture is: the load balancers are deployed on the ingress, very close to the UPF.
M
There are various options available for the load balancer: it could be virtualized, or it could be a dedicated device, or it could be part of software that resides along with the UPF. And then some sort of global input is also helpful in making the most appropriate decision while choosing the sites where the traffic is diverted.
M
So most of this is information from wireline networks, but it looks like most of the things are really common even for the wireless or mobile use cases. The additional requirements would be to take care of UE movement in wireless environments, from one location to another, and to use some sort of stored session information.
M
It looks like load balancers would be able to support those use cases as well.
M
Next slide, please. Yeah, so, in summary: the CAN requirements involve selecting a service instance based on the current state of the service instances, and modern-day load balancers perform sophisticated functions for choosing the right service instance. The load balancers run in the overlay, and they abstract the applications from the network; so a combination of intelligent load balancers along with a smart network layer can probably solve the CAN requirements.
M
Yeah, that's that; that concludes my talk.
R
Hi, Dirk Trossen. We were looking at the previous slides and they were not numbered, so I'm not entirely sure which slide I'm going to need to go back to; you mentioned address rewriting.
R
I replied to his question in the chat. Isn't there a lot to be talked about if you have load balancers now at every UPF? If I want to mix and match load balancers, I need to understand what the control traffic from the load balancer controller to the load balancers looks like, in order to have consistency of sessions.
R
If I want to mix vendor A and vendor B on UPF 1 and UPF 3, I need to make sure that the entire signaling of the metrics, the metrics that are being exchanged, is somewhat understood and treated the same way. Otherwise I'm moving as a UE from UPF 1 to UPF 3 and suddenly my behavior changes, because the load balancing decision is entirely different.
R
So it's the question about where you see, you know, what's the difference really between the L3 load balancer and what I think the Dyncast architecture later in the next presentation calls the D-router? And how do you see the coordination between the controller and the load balancers being done?
S
There are a lot of documents that are either RFCs or very close to becoming RFCs; please take a look at them. Number two, I think using and abusing 5G terminology is a really dangerous thing to do here; it goes actually to Wim's questions. A lot of these issues have been solved in the 5G packet core and control plane; it's not a routing problem. So we are trying to get a user to the right location; the UPF could be there, or there could be something else, it could be a BNG. I think if we abstract the terminology, the solution would become much more applicable.
S
It's definitely not a 5G problem, so abusing terminology is not the best thing here. Number three: look around at every CDN and hyperscaler, including my employer; we are deploying hundreds of edge locations every month, and they are not connected by black magic. Let's take a look at what's been done. We don't want to take RIP version 2, go to IDR, and, you know, offer them to do inter-domain routing with it; there's existing technology that really works.
A
Thank you. Okay, well, a very quick one, then we'll move on to the next one.
T
Dean Bogdanovic. So, what hyperscalers can do with the existing technologies: their application and network requirements are aligning; but for the individual enterprises and the network providers they might not be aligning, and for that reason some of the existing technology that we are using today cannot answer some of the problems that the enterprises are asking about.
D
Hello, yes; while we're queuing up the presentation, let me just remind everyone to state your name at the mic. I know that if you're using the mic line management app, it does pop your name up in the display, but that doesn't help with the recording. So please do remember to state your name. By the way, my name is John.
J
Okay, hello guys, my name is Jeremy from Huawei, and the topic today is the Dyncast architecture. So what is Dyncast? Dyncast is Dynamic Anycast, which has been proposed in the draft on the Dyncast architecture. So next, please.
J
So Dyncast is a potential solution for CAN, right. This is an overview of the Dyncast service model. First of all, you can see that we will have multiple service instances installed and running in multiple edge sites, right, and such a service will be represented by an anycast IP address, and this anycast IP address will be distributed to the client by the DNS system, right. But the key question is how the client can find the best service instance
J
among all the service instances running in different edge sites. That is the problem that we will try to solve in CAN, in Dyncast, right. So we define some elements, for example the D-router and the DMA, and I will introduce the detailed information in the following slides. So maybe we can go to the next slide.
J
Thank you. Next. Okay, we define some new terms. Service is really easy to understand; it's a functionality that provides some service, right. The service instance is a running environment that makes the functionality of the service available, and one service can have multiple instances running in different edge sites, at different network locations, right.
J
We also define some new elements. One is called the D-router, the Dyncast router; it is a node supporting the Dyncast functionality. And we also define the DMA, the Dyncast Metric Agent; it's an agent to collect information, say the computing metrics, and to send the metrics among the DMAs and the D-routers, but it will not perform the forwarding decision, right.
J
So in order to identify the service, we propose two terms here. The Dyncast Service Identifier, the D-SID, is an anycast address identifying a service in Dyncast, and all service instances providing the same service are identified by the same D-SID. And in order to identify the service instance, we propose a new term called the Dyncast Binding Identifier, the D-BID; it is a unicast address, so that the client's traffic can reach the specific service instance by this address, right. So next slide.
J
So I will go into the details of the distributed mode today; let's take a look at the distributed mode at the left side. In distributed mode, the Dyncast node will be aware of the status of the computing resources, and this metric of the computing resources will be distributed among the Dyncast-capable nodes, so that the optimal forwarding or routing decision can be made by the ingress D-router, right; in the centralized mode...
J
Okay, this is the distributed mode of Dyncast. As you can see from this figure, we have multiple elements, like the D-routers running in the network; these D-routers provide the Dyncast functionality. And in order to collect the computing resource status and distribute it among the D-routers, we define the metric agent, the DMA, right. So you can see that the DMA will collect the information and send it to the D-router, and the DMA can be a logical function running on a specific router, right.
J
So in this way we can collect the information and distribute it among the D-routers, so the D-routers can have this computing resource status. Combining the network metric and the computing resource metric, the ingress D-router can make the best routing decision, yep. Next, please.
J
Okay, so this is the format of the service metric information. In order to support Dyncast, we need to define a metric associated with the D-SID and the D-BID, right, and the metric should be calculated and advertised. Regarding the metric here: it can be a single-dimension value, for example a number, right, or it can be a multiple-dimension value; it can be a tuple with multiple numbers.
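A small sketch of what such an advertisement could carry; the field names are illustrative (the draft's exact encoding is not given in the talk):

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class DyncastMetricAdvertisement:
    d_sid: str                              # anycast service identifier
    d_bid: str                              # unicast binding identifier (the instance)
    # Either a single scalar (e.g., a normalized load index) or a
    # multi-dimension tuple (e.g., (cpu_load, gpu_load, mem_util)).
    metric: Union[float, Tuple[float, ...]]

adv = DyncastMetricAdvertisement("DSID1", "DBID21", (0.4, 0.1, 0.7))
```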
J
Okay, this is how to distribute the metric among the D-routers. For example, we have D-SID 1, D-BID 21 and a metric of, for example, 1, right; this kind of information will be distributed. Yep, next, please.
J
Okay, if you take a look at this slide, you will see that it is really similar to the solution provided by the previous speech, right, with the load balancer in the ingress router; actually, what we're providing is really a similar one, right.
J
We have D-SID1 and D-SID2, which identify two services, right, and for each service we have two respective service instances: 21 and 31, and 22 and 32, right. And we have forwarding entries here; for each Dyncast forwarding entry we will have the metric information. So we have two types of metric here: the network metric and the service instance metric. The network metric would be something like a normal cost, or a delay...
J
...something like this, right. For the service instance, it can be a measurable index or a number, right, relating to the load, the resource availability, something like this. So this is a really easy case for illustration, right. In this case, a service demand with the destination address D-SID2 will be sent from the client to the D-router, and at D-router 1, right, we pick the best instance for this request; so we update the destination address of the packet from D-SID2 to D-BID22, which is a unicast address, right. So by using this address, the packet can be forwarded from D-router 1 to the service instance D-BID22, right, which is very easy to understand, right. Next.
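A minimal sketch of the ingress D-router behavior just walked through, with made-up metric values: selection simply minimizes a combined network-plus-instance metric, and the anycast destination is rewritten to the chosen instance's unicast D-BID:

```python
# Dyncast forwarding table at the ingress D-router (illustrative numbers):
# D-SID (anycast) -> list of (D-BID unicast, network_metric, instance_metric)
table = {
    "DSID2": [("DBID22", 2.0, 1.0),   # close and lightly loaded
              ("DBID32", 1.0, 5.0)],  # closer, but heavily loaded
}

def forward(dst_anycast: str) -> str:
    """Pick the instance minimizing net + compute metric and return the
    unicast D-BID that the packet's destination is rewritten to."""
    entries = table[dst_anycast]
    d_bid, _net, _inst = min(entries, key=lambda e: e[1] + e[2])
    return d_bid

packet = {"dst": "DSID2", "payload": b"service demand"}
packet["dst"] = forward(packet["dst"])   # anycast -> unicast rewrite
print(packet["dst"])                     # DBID22 (2.0 + 1.0 beats 1.0 + 5.0)
```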
J
Okay, so this slide talks about some potential work. As you can see, it's really easy to understand: we need some flow affinity work in Dyncast to provide better performance, right. But this topic is still open; since the last side meeting we have collected multiple solutions for flow affinity, and one of them is this one.
J
We provide a binding table in the ingress D-router. As you can see, we have a flow identifier; for example, it would be made from the classic five-tuple: the source IP, destination IP, protocol, something like this, right. And it will be associated with a D-BID, for example the D-BID 22 right here, and we will have a timer as well. So by looking up the flow identifier, the packets will be forwarded to D-BID 22 directly, without mapping again. So this is some potential work for the future, and you are really welcome to provide any comments and questions on this part, yep. So next.
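A sketch of that binding table with the timer, again with an illustrative structure and timeout value: the first packet of a flow triggers instance selection, and subsequent packets within the timeout reuse the pinned D-BID:

```python
import time

BINDING_TIMEOUT = 30.0            # seconds; illustrative value, not from the talk
bindings = {}                     # five_tuple -> (d_bid, expiry)

def lookup_or_bind(five_tuple, select_instance):
    """Return the pinned D-BID for this flow, selecting one on first use
    or after the timer expires (refreshing the timer on each hit)."""
    now = time.monotonic()
    entry = bindings.get(five_tuple)
    if entry is None or entry[1] < now:
        entry = (select_instance(), now + BINDING_TIMEOUT)
    bindings[five_tuple] = (entry[0], now + BINDING_TIMEOUT)
    return entry[0]

flow = ("10.0.0.1", "DSID2", 40000, 443, 6)
lookup_or_bind(flow, select_instance=lambda: "DBID22")
# Later packets stay on DBID22 even if a fresh selection would now differ:
assert lookup_or_bind(flow, select_instance=lambda: "DBID32") == "DBID22"
```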
Q
Okay, so I do have some comments for the open debate, but before I make them, I want to clarify the proposal I just heard. In the D-router, was the assumption that it encapsulates packets or not? Because earlier proposals did not talk about encapsulating; they were very clear that the address routing in the infrastructure had to be related to the selection.
J
Okay, Joel, actually we're not proposing a specific encapsulation solution here. It can be an easy address rewriting; but if we have a tunnel from D-router 1 to the D-BID 22, right, it can be an encapsulation, okay.
Q
If we move to an overlay architecture and stop trying to inject this stuff into the underlay routing, we could then have a discussion about whether there was even a problem to be solved. We have multiple overlay techniques that already can tunnel to edge devices, which you may want to call a D-router, but it's just the same as a lot of other edge application handling; they can handle selecting a tunnel to deliver something.
Q
So I'm left really confused, because your architecture either asks us to move stuff into routing for no good reason, or it isn't really doing anything new, and we don't need a new working group to make use of tunnels in the routing space. Maybe we need a transport or application one, or maybe we need something that works on how applications tell the network edge what they want.
E
Okay, next one. I think this layer-3 load balancing is part of the solution: for the load balancing, mapping anycast to a unicast address can provide a good way to select the best service instance, and this mapping can be done at layer 3. In this situation the whole network can work as a distributed load balancer. This layer-3 load balancer can obtain both the network information and the computing information, and it is closer to the user. So I think it's valuable to explore this layer-3 load balancer.
U
My comments and questions are for the Dyncast draft. I'm a little confused by the part on the metrics of the service instances in the slides. For me, the time span from the beginning of the service at the service instance to the end of the service could range from minutes to seconds to milliseconds.
U
So when the time span is around milliseconds, the metrics have to be updated at such a high frequency to the remote D-router, where all the forwarding decisions have to be made, and actually I don't think it is possible for the router to make this decision, because it's too fast. So those are my comments and questions for this, yep. Thank you.
J
Comment received; I think you are raising some very important points about the updating. Yes, we agree with that: it will not be updating per millisecond or something like that; that would be a nightmare, right. So we need to set some bar or something like this, and it will be completely an engineering task. So we believe that we will have a way out, right; we'll find a good way out, yeah.
S
Jeff Tantsura. The question is about metric normalization. So, as Joel said, if you want to change the destination you need to impose an additional header; you cannot change destination addresses, right. So we are going back to path computation, and even looking at traditional PCEP with a very limited number of inputs, computation is complex at large scale; it takes minutes, if not hours.
S
Here you are talking about an abundant number of inputs that are not really comparable: you cannot compare CPU load to GPU load to memory utilization to something else, right. So how are you going to compute a path to a particular destination with an abundant number of metrics that are not comparable? I really don't see it as reasonable.
J
Yeah, thank you; a very good question. Well, I think we can discuss more on the mailing list, because it's a big question, yeah. How about that, Jeff?
H
But if we're doing some, you know, computation, well, that could be: I have this GPU that I found somewhere in my network near me, this GPU will compute on this stuff for an hour, right, and then I want to get the results. So is that what we mean by compute? Or is it something that finishes in 10 milliseconds? Because the actual notions of continuity or affinity are completely different. And then it's not only over time, it's also over space, because things move.
V
In fact, I did some clarification work in the chat box. First point: people talk about the prior work I mentioned; the CAN work to some extent is similar, and the implicit thing behind the BoF and the CAN work is that the operator can hold both the network status and the application load status.
V
So I think this is the scope: maybe in some scenarios the application providers only know the application status and cannot know the network status, while in other scenarios the network service provider only knows the network status and cannot know the application's status. So I mean that's the scope of the CAN work: the scenario where the operator can hold both the network status and the application status.
V
That's what I wanted to clarify first. The second one: I think now the use case and also the problem statement have been clarified and presented, but we know that the solution is open. For the load balancer, I'm not certain about it, because I'm not sure whether the load balancer is always co-located with the UPF or not. If it is co-located with the UPF...
V
...maybe this belongs to the 3GPP work and not the IETF work; this is the first thing I think we must determine. The second one: for the load balancer wanting to learn the network status, I think that may be a little different, a little challenging, because in the network there are multiple paths, and the paths have different statuses.
J
Okay, thank you. So, regarding the Dyncast solution, I have some further comments. I have to say that actually we only provide the solution for the one use case where the operator has the network and the computing resources together, so they can use it; we will not redirect the request to another edge site which is operated by another operator; that would be impossible, yeah.
E
Okay, thank you. So I just want to emphasize that those are the questions we have in mind when we're doing the open discussion. The use cases were presented by Peng Liu, so focus on those use cases and not random ones. Okay. So, Robin, do you have more things to say? Oh, you finished.
W
So I think here Dyncast provides a potential solution for CAN, because I think that as operators we can hold both the network information and also the service load, and so then the network can learn the application load and do more optimization. And on the other hand, some people here raised a service continuity problem. In fact, I have noticed that in the Dyncast solution there are the flow-affinity-related issues.
W
From my understanding, I think that Dyncast will select the service node when the session starts, and this service node won't change until the session ends. So I think, from this point of view, we can solve this service continuity problem to a certain extent. So, in conclusion, I think that as for the Dyncast solution, we can take it further and explore more. Yes, thanks.
X
Yeah, Chaitanya here, and I'm kind of maybe adding on to Jeff's comments here. So the key takeaway for me here was this notion of a network metric: there was congestion, there was computing load, and probably other stuff. I think that part is somewhat interesting, but I did not see it being clearly specified as to what it is. What kind of metrics are there? How are they injected?
X
How are they measured? How would they be used in routing protocols? I think that part is probably within the routing area and perhaps something to look at, so yes.
X
Yeah, for the rest, I'm not very sure whether this is a routing problem or not.
J
Thank you, Kevin. One comment here: yeah, you are right, the metric hasn't been described specifically yet, and that is a big piece of work we should focus on in the future. I think that part would belong to the routing area for sure, and yes, I agree with you.
T
Yes, hi, Dean Bogdanovic. Throughout this IETF we are seeing some common themes that people bring to the groups and try to solve, and it points out that we have some underlying fundamental problems that we have to look into, and we have to start having an open discussion about them. Yesterday there was the problem of how to route in the satellite case, and we are hearing, hey, we would like to add more than just where to route: we want to decide
T
also why we are routing and what we are routing. And yesterday there was a good presentation in the routing working group that addresses these problems we keep hearing, and the use cases we are seeing in different working groups and in different drafts fundamentally touch on the same thing.
T
We also have the issue that more and more mobility problems are being brought in, and since 3GPP somewhat owns the mobility architecture, they are trying to push the 3GPP restrictions, which are very much telco-oriented and designed with the telco architecture in mind, out into the IP world. That's another thing we have to look at: how that mobility problem can be solved, rather than continuing with patch solutions.
T
I think it's about time for us as a community to look into the fundamental problem and try to see what can be done there. There are some bits and pieces of this work, but we need a place for it, and this is my ask, first to the routing area directors and then to the IESG: give us a place where we can have that discussion.
E
I agree, I agree, so thank you. We don't have much time left, only five or ten minutes, so let's quickly stay focused on this work; then we'll bring up the other, bigger problems we need to address, or, yes, you need to address. So, next one, great, right.
O
Ericsson here. It seems to me that what we're discussing is a new, or presented as a new, performance metric. So I think it might be something the IETF can work on, if it's different from, for example, the latency introduced by computing.
O
I don't know whether we are in a position to do computing capacity measurements, or how to guarantee that using IETF work, but if this is about a performance metric, I'm not sure the routing area is the right place. I would point to the IP Performance Measurement (IPPM) working group, which works on various performance metrics and measurement methods.
O
Active and hybrid methods as well. So if there is anything to do with a new performance metric that expresses or reflects computing resources, then it might be for the IPPM working group. Thank you.
K
Acee Lindem, Cisco Systems. I just want to say I agree totally with what both Joel and Jeff said, and my comments would be the same. The Dyncast architecture as such is very wishy-washy; it just kind of throws together "oh yeah, we have these new metrics." I'm worried people are going to start working on all these details.
K
In the end there's only a single routing decision. Injecting a dozen different metrics doesn't really do you any good, because everybody has to normalize them and figure them out, and with the load balancer you already have a solution and an architecture. I think mixing these things up is misguided, and I'm worried people are going to just start working on details without having a good starting point.
K
Look at it also with this anycast address: if you're not tunneling, everybody has to be using the same algorithm and the same metrics, otherwise you're going to have loops. That's a well-known fact in routing. So let's not have a bunch of drafts with the details of this before we agree on an architecture. Thanks.
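
[A minimal illustration of the loop hazard named here, using a hypothetical two-router topology and made-up load figures: under hop-by-hop anycast forwarding without tunneling, two routers holding inconsistent metric views each steer the packet toward "the other side", and it ping-pongs between them:]

```python
# Each router's next hop toward anycast sites A and B.
NEXT_HOP = {
    "R1": {"A": "site-A", "B": "R2"},  # A is local to R1; B is reached via R2
    "R2": {"A": "R1", "B": "site-B"},  # B is local to R2; A is reached via R1
}

# Inconsistent compute-load views, e.g. updates arriving at different times.
LOAD_VIEW = {
    "R1": {"A": 0.9, "B": 0.1},  # R1 believes site A is overloaded
    "R2": {"A": 0.1, "B": 0.9},  # R2 believes site B is overloaded
}

def next_hop(router):
    # Same algorithm everywhere (send to the least-loaded site), but the
    # inputs disagree, so the two routers' decisions conflict.
    site = min(LOAD_VIEW[router], key=LOAD_VIEW[router].get)
    return NEXT_HOP[router][site]

hop, path = "R1", ["R1"]
while hop in NEXT_HOP and len(path) < 6:  # give up after a few hops
    hop = next_hop(hop)
    path.append(hop)
print(" -> ".join(path))  # R1 -> R2 -> R1 -> R2 -> R1 -> R2: a forwarding loop
```

[With a tunnel, R1 would encapsulate straight to its chosen site and R2 would never re-resolve the anycast address, which is why the objection is scoped to the non-tunneling case.]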
Y
Yeah, actually I'd like to provide some preliminary hints on how the metric could be defined, because we have done some preliminary implementations in this area. The whole work is trying to do joint optimization over the computing metrics and the network metrics. So basically we can have normalized metrics for the computing resources and normalized metrics for the networking resources.
Y
So basically, when the system is deploying the services, it knows that for a given service it may, say, choose a few cores for it. So roughly there is a normalized metric, say the total capacity in terms of the number of sessions that can be supported.
Y
So that's how the anycast IP addresses, or the destination IP addresses, are coupled with the normalized computing metrics. Of course, the computing resources and the network resources should ultimately be calculated in conjunction, but this is just a simple example of how it could be done. I just wanted to provide this kind of information.
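
[A minimal sketch of the normalization idea described above: express each site's computing capacity in sessions, normalize it into the 0..1 range, normalize the network cost the same way, and rank the sites behind one anycast address by a joint score. The weights, capacities, and costs are hypothetical, not taken from any draft:]

```python
def joint_score(active_sessions, max_sessions, path_cost, max_path_cost,
                w_compute=0.5, w_network=0.5):
    """Lower is better; both terms are normalized into [0, 1]."""
    compute_load = active_sessions / max_sessions  # fraction of capacity used
    network_cost = path_cost / max_path_cost       # fraction of worst path cost
    return w_compute * compute_load + w_network * network_cost

# Two sites advertising the same anycast address:
sites = {
    "edge-1": joint_score(active_sessions=800, max_sessions=1000,
                          path_cost=10, max_path_cost=50),
    "edge-2": joint_score(active_sessions=200, max_sessions=1000,
                          path_cost=30, max_path_cost=50),
}
print(min(sites, key=sites.get))  # edge-2: its light load outweighs the longer path
```

[The open questions raised earlier in the session, such as who defines the weights and how the values are carried and kept consistent across routers, are exactly what this sketch leaves out.]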
C
Okay, sure. I have just some very simple comments regarding the 5GS as the example here; I think it's a very significant one. Even right now the 5GS is starting to introduce some procedures that provide ways to expose the UPF information to the external world, and likewise for things like a trusted AF. So there will be some paths through which to bring in the network metric or the computing metric.
C
Those are the concepts for integrating with the networking part and the 5G part, so I think this work is important, and it would fit the first use case mentioned in the presentation. Thank you.
D
Okay, so thank you, Linda, and thanks, everybody, for your participation. I think this was a really good and really well-engaged conversation. When the proponents proposed the BoF, they said their goal was to increase awareness and increase participation, and I think that, from that perspective, this BoF has already been a success.
D
I believe the proponents have gotten a ton of valuable input and a lot of homework to take home and work on, which should keep them busy for some weeks or months to come. It would be really nice not to see the energy we've built up today dissipate. I'd like to remind everybody there's a mailing list on which to continue the conversation, and I hope you will use it, and in particular take there the comments you pasted in the chat room, which I've only vaguely kept up with because, as usual, there's too much information streaming past between the chat and what's being said at the mic to follow it all.
D
So please do follow up on the mailing list with that. With regard to the questions we've got here on the slide: I don't think I saw any disagreement that the use cases are important. I heard a lot of reference to existing solutions; I did not hear a consensus that you simply need to apply existing technology and we're done, although I did hear a great deal of "have you looked at X enough?"
D
For example, Jeff mentioned ALTO, and there were other things, so I think there's certainly a question about whether existing technologies have already been paid enough attention. Regarding whether the proposed architecture would be harmful in any way, I heard, especially from traditional routing area people,
D
many people saying: wherever this work is done, do not put it in the underlay. There was a loud plurality of apparent opinion in that direction. Regarding where the work should continue, there doesn't seem to be a consensus; there does seem to be a consensus that there are various pieces, and that at most only some of them are part of routing.
D
I think we're going to have to continue to chase that question. One other thing that was interesting, to me anyway, in the discussion of load balancers versus something else, was the point Dirk brought up: the D-router in Dyncast can really be thought of as a load balancer. And there were some other comments in the chat along the lines of:
D
well, gosh, is what we're really looking for standardization of load balancers? Because right now they're all proprietary secret sauce. So, yeah, we're at the top of the hour and I don't want to make us run over, but just to reiterate: I think we've had a good BoF today, and we've surfaced a lot of valuable points.
D
I think this discussion can and will continue, and there is not, to me, any clear next venue for the work to land in other than the mailing list. But you have a lot of people's attention; let's keep it. Thank you, and if the chairs would like to say anything to wrap up, please go ahead.
E
Thank you, everybody. Just make sure to go back to the notes: if you asked questions, make sure your questions are captured and your names are captured correctly. Thank you.
E
Okay, okay. I find the chat window has many interesting questions.