From YouTube: IETF 115 Host Speaker Series
This talk, organized by Cisco, the IETF 115 meeting host, and presented by Kevin McMenamy and Fredrik Pihl, covers Project Callisto and extraterrestrial collaboration with WebRTC and AV1.
C
You're ready to go? All right, after a two-minute launch today, we are ready to rock and roll. Let me welcome Kevin, who will be here with us in the room to represent Project Callisto, an extraterrestrial collaboration with WebRTC and AV1. If you're in this room and you are part of the RTCWEB working group, raise your hand. Okay, so several people, cool. Other codec and other friendly people here. Take it away.
D
Thanks, Ted. It's an honor for me to be here speaking to all of you, esteemed Internet Engineering Task Force members. Thank you for having me. I'm Kevin, I'm with Cisco, and I wanted to share with you a brief version of the story of how we came to participate in Project Callisto. We partnered with Lockheed Martin, together with other companies in the industry like Amazon, with their Alexa technologies, and Apple, with their iPad technologies, to try to solve the problem of real-time, or as real-time as we can do given the latencies, communications in deep space exploration. And I want to give credit to the folks who worked on this implementation and built the original version of this presentation.
D
I'm just here to share it with you all. So what is Project Callisto? It's a technology package on board the Orion spacecraft to test the viability of commercial technologies in space exploration, and I mentioned those technologies already: WebEx by Cisco, Amazon Alexa, and an Apple iPad tablet mounted inside the Orion spacecraft, to allow future manned, or crewed, excuse me, crewed space missions to collaborate in near real time with folks on the ground. And Orion is the deep space exploration spacecraft built by Lockheed Martin for the Artemis program.
D
Thank you. You can scan that QR code up at the top if you want to learn more, or go to that URL in tiny font up in the top right-hand corner, lockheedmartin.com, and find the Callisto page there.
D
So Artemis I is the first of the Artemis missions. It will launch from Kennedy Space Center, now targeting a November 16th launch window at 1 a.m. Eastern time. That's 6 a.m. here in London, for those of us that live here. It will circumnavigate the Moon and return to Earth as a test flight of the system.
D
We're all super excited and crossing our fingers that that November 16th date holds. They've been monitoring the hurricane weather, and they've had a few resets on the calendar, but November 16th is the current scheduled launch date, so we'll see if that happens. And the challenge that we were asked to solve is real-time HD video collaboration in space, and there were, as you can imagine, a few technical challenges.
D
The
first
one
is
It's
a
Long
Way
to
the
Moon.
You
know
our
industry
standard
is
300
milliseconds
round
trip
for
real-time
collaboration
with
project
Callisto.
We
were
asked
to
engineer
it
to
up
to
20
seconds
or
20
000
milliseconds
round
trip,
which
is
awfully
a
long
time
in
the
world
of
Video
Communications.
As
those
of
you
who
work
on
video
now
and
if
you
want
to
see
what
to
20
000
milliseconds
looks
like
oh,
this
is
not
a
build
slide.
D
I lost my builds because I converted it to PDF, never mind the PowerPoint version. That line draws out over 20 seconds, so you can kind of experience it. So obviously the questions are: are we going to have delays in the various protocols, not only at the video bitstream layer in our AV1 codec, but also in protocols, you know, setting up, signaling, connecting the call, etc.? And how effective are our error resilience and recovery techniques going to be in the face of these huge latencies and packet loss and jitter?
D
Awesome, thanks for the demo tape. So the second problem is the satellite network has packet loss and jitter and very low bit rates. It's an asymmetric network: 200 kilobits per second upstream from the ground to Orion, and one megabit down from Orion to the ground. And of that 200 kilobits, 160 kilobits is video and 40 kilobits is audio. So you know, that's a challenge.
D
How do we get high-definition, 30-frames-per-second video to work at 160 kilobits per second? And there's also packet loss and jitter to contend with.
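To put that bit rate in perspective, here is a back-of-the-envelope calculation of the per-frame budget. The arithmetic is mine, not from the talk:

```python
# Per-frame bit budget for 30 fps video on the 160 kbps uplink share.
video_kbps = 160   # video portion of the 200 kbps ground-to-Orion link
fps = 30           # target frame rate

bits_per_frame = video_kbps * 1000 / fps
print(f"{bits_per_frame:.0f} bits per frame (~{bits_per_frame / 8:.0f} bytes)")
# Each HD frame must average only a few hundred bytes on the wire,
# which is why codec efficiency dominates this design.
```

Roughly 5,300 bits, about 667 bytes, to describe every HD frame.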
D
The third problem is that the first Artemis mission, Artemis I, is an uncrewed flight. There are no humans on the thing to touch the iPad and launch the app and place the call and all of that stuff. So how in the world are we going to solve that? How are we going to launch the application? How's the application going to know when to join the call? What happens if the app crashes? What happens if a pop-up gets in the way of the application? So there's nobody to touch it.
D
No,
you
know
we
couldn't
use
Audible
commands
either,
because
our
system
is
the
thing
that
is
receiving
audio
from
the
ground
playing
it
out
of
our
speaker
on
the
iPad
tablet.
And
then
you
know
we
could
try
to
Loop
that
back
into
the
microphone
I
suppose
but
like
what
what
we
have,
we
would
have
to
have
a
call
up
in
order
to
have
that
pathway
established
so
like
so
we
had
to
come
up
with
clever
Technologies
for
that
our
clever
techniques
to
solve
that
problem.
D
And
surprisingly
it
just
worked
like
out
of
the
box.
We
were
shocked
except
h.264,
video,
Codec
kind
of
fell
apart
at
these
low
bit
rates
and
looked
pretty
bad.
So
we
introduced
ab1
into
the
project
which
we
had
wanted
to
do,
but
we
tested
it
with
264
before
moving
forward
with
ab1.
So
we
said
yep
okay,
confirmed
264
is
not
going
to
work.
Let's,
let's
go
ahead
and
Implement
everyone
so.
D
Just
a
few
words
about
ab1
I'm
sure
many
of
you
in
the
room
are
aware
of
it,
and
many
of
you
I'm
sure
have
contributed
to
it.
But
av1
is
a
new
video
Codec
designed
by
the
alliance
for
open
media,
which
was
sponsored
by
several
companies,
but
three
of
those
were
Google
Cisco
and
Mozilla,
contributing
our
very
our
respective
video
Codec
Technologies
into
this
effort
to
produce
a
codec
that
would
be
as
good
or
better
as
h.265,
while
being
open
source
and
royalty
free,
and
you
can
read
more
about
it
at
the.
D
You can read more about our contribution to that at the link down at the bottom. And I just want to shout out Jonathan Rosenberg, in the room here, for initiating and leading this effort when you were at Cisco. Thank you, Jonathan.
D
So we have our own Cisco implementation of the AV1 codec, and that's primarily because the AOM version is not super optimized for real-time, low-latency, bi-directional video conferencing, real-time communication. So we have our own implementation of the AV1 encoder, developed by colleagues in the UK, Norway and China, and highly optimized for real-time, latency-sensitive communications. For decoding, we use the open-source dav1d decoder from VideoLAN that was adopted by AOM.
D
Our
implementation
gives
us
about
30
percent,
better
compression
or
30
less
bit
rate
at
the
same
complexity
or
the
same
CPU
consumption
as
our
reference
open,
h.264
codec
that
we
publish
and
lots
of
other
companies
use
and
we've
integrated
av1
into
our
WebEx
product
in
WebEx
meetings,
and
we
also
implemented
it
into
this
custom
version
of
the
WebEx
app
and
the
server
the
conferencing
server
that
we
created
for
this
project
callista.
D
So
what
were
the
challenges
and
solutions
at
the
video
Codec
bitstream
level
that
we
had
to
solve?
So?
First
of
all,
it
had
to
have
great
quality,
or
at
least
good
quality
at
extremely
low
bit
rate
160
kilobits
per
second,
and
to
solve
that
av1
itself
has
inherently
in
it.
You
know
lots
of
tools
and
techniques
for
getting
pretty
decent
quality
at
very
low
bit
rates
like
quad
tree
super
blocks
and
context,
adaptive,
entropy
coding
and
various
other
techniques.
D
So
we
employed
all
of
those
Second
Challenge
is
the
huge
latency
up
to
10
seconds
in
each
Direction,
so
obviously,
packet
loss
requesting
a
keyframe
is
not
going
to
work,
so
we
just
simply
ignore
keyframe
requests
and
we
generate
keyframes
periodically
and
then
there's
packet
loss,
so
we
implemented
normally
and
I
really
enjoyed
Luke's
presentation
earlier
about
quick
over
media.
So
if
you
guys
attended
that
this
morning,
then
you
remember
the
IPP
or
IP
BP.
D
So we implemented a broadcast-like GOP structure with discardable keyframes, to make it more resilient to packet loss. And then there's the problem of keyframes traditionally being very large, and subject to being corrupted by packet loss in transit. We downscale to half the resolution, then encode, and then upscale on the other end when we decode. And we also allow the quality to kind of build up over several frames if there's not a lot of motion in the video.
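The downscale-then-upscale keyframe trick can be sketched in a few lines. This is a toy illustration using 2x2 block averaging and nearest-neighbour upscaling; the actual WebEx pipeline is not public, so treat the helpers (`downscale_half`, `upscale_double`) as hypothetical:

```python
def downscale_half(frame):
    """Average 2x2 blocks: sending the keyframe at half resolution in each
    dimension roughly quarters the number of pixels it must describe."""
    h, w = len(frame), len(frame[0])
    return [[(frame[y][x] + frame[y][x + 1] +
              frame[y + 1][x] + frame[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def upscale_double(frame):
    """Nearest-neighbour upscale on the receive side after decoding."""
    return [[px for px in row for _ in (0, 1)]
            for row in frame for _ in (0, 1)]

frame = [[10, 10, 20, 20],
         [10, 10, 20, 20],
         [30, 30, 40, 40],
         [30, 30, 40, 40]]
small = downscale_half(frame)       # 2x2 image, far fewer bits to code
restored = upscale_double(small)    # back to 4x4; fine detail is traded away
```

The lost detail is then recovered gradually, as the quality builds back up over subsequent delta frames.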
D
The server and the app switch between normal mode and this error-resilient mode, described on the right-hand side of the slide, based on the frequency of IDR requests. So if we start getting IDR requests, we switch into this resilient mode, and then we switch back out of it, to try to increase quality, when IDR requests are not coming through. So we ignore them, but we still get them, and we switch into this mode as needed.
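The switching logic described here, treating incoming IDR requests as a signal rather than honouring them directly, could look something like this sketch. The class name, window, and thresholds are my invention, not Cisco's code:

```python
class ResilienceController:
    """Toy model: keyframe (IDR) requests are never honoured directly.
    Frequent requests within a sliding window flip the encoder into the
    error-resilient GOP mode; once requests stop arriving, it switches
    back to normal mode to build quality up again."""

    def __init__(self, window=30.0, enter_threshold=2):
        self.window = window                  # seconds of history kept
        self.enter_threshold = enter_threshold
        self.requests = []                    # timestamps of IDR requests
        self.mode = "normal"

    def on_idr_request(self, now):
        self.requests.append(now)
        self._update(now)

    def tick(self, now):                      # called periodically
        self._update(now)

    def _update(self, now):
        # Keep only requests inside the sliding window.
        self.requests = [t for t in self.requests if now - t <= self.window]
        if len(self.requests) >= self.enter_threshold:
            self.mode = "resilient"   # periodic, downscaled, discardable keyframes
        elif not self.requests:
            self.mode = "normal"      # quiet again: rebuild quality
```

The key design point is that with a 20-second round trip, reacting per-request is useless; only the rate of requests carries usable information.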
D
So that's how we solved the no-interaction problem. I just wanted to close by giving credit to my colleagues who contributed to this. I was the director of engineering at the time, and Thomas and Dimitri and Alex on our team worked on this, together with many other engineers at Cisco and our colleagues at Lockheed Martin. So I just wanted to say thanks to all of them for making this a reality, and fingers crossed this thing launches on November 16th. Thank you for having me.
C
The
cues
are
open,
so
if
you
can,
please
use
the
tool,
I
realized.
It
was
a
little
bit
hard
for
people
because
the
because
of
the
URL
problem
on
the
agenda.
D
Yeah, that's a good question. To be honest, it's a bit of a walkie-talkie kind of mode. You certainly can't interrupt each other like you can on a normal video conference. Even in normal video conferencing, you can accidentally step on each other because of the end-to-end latency.
D
Here,
that's
just
exacerbated
right
that
you
really
need
to
kind
of
wait
and
let
them
finish
talking
and
before
you
try
to
speak
over
them,
but
other
than
that
it
works
great
and
the
it's
not
just
video,
but
it's
also
real-time,
whiteboarding
and
and
content
sharing.
I
should
have
mentioned
that
in
the
presentation.
So
imagine
you
know
we
could
draw
together.
It
would
take
a
while
for
what
you've
drawn
to
show
up
for
me
and
what
I've
drawn
to
show
for
you,
but
it
works
conflict.
B
I'm sure, yeah. I think that you chose AV1 over H.264 because it's very efficient, right? Yes? Yeah. But H.265 is also very efficient. So what's the benefit of AV1 over H.265?
D
Yeah
I
think
from
our
perspective
and
I,
don't
know
that
everyone
in
the
industry
shares
this
perspective
that,
from
our
perspective,
at
Cisco,
the
challenge
with
h.265
is
the
patent.
Okay
challenge
right
so
for
us,
ab1
is,
is
open,
source
and
free
to
use,
and
so
it
makes
sense,
even
if
they're,
roughly
equivalent
technically
thank.
F
Yeah, I've tried a million times and I can't get on the on-site queue, so I apologize. Jonathan Rosenberg. Super happy to see this; this is awesome. Normally there's a trade-off for video encoding between computational complexity and bandwidth, and so to get the bandwidth down you could do a lot more work on the encoder side. Did you also have computational constraints? Because computational complexity leads to, I guess, weight or power or something, which are all considerations on the Moon. So what's the compute environment here?
D
That's a great question. Actually, I guess from our perspective, we didn't think that it was usable outside of this very constrained use case, because, you know, we basically disabled SRTP, we disabled TLS; not the kind of thing we would normally recommend customers do. It was very purpose-built for this closed-network system, where confidentiality and data integrity and all that are less of a concern. Okay.
G
And there was a previous group, a BoF, that I was in where we were talking about, you know, low-power stuff, and particularly things that were time-sensitive as to how much bandwidth was available and whatnot. It'd be kind of interesting, if you were to build up from what you got to with that and minimally add back to it, what you'd end up with: something that could be very low bandwidth but still have that high quality, in order to reach those kinds of rural areas. Yeah.
D
Hi, good to see you, thanks. Can you talk a little bit about the limitations in terms of the features and conferencing? Like, could you have two people on the ground joining a conference with the Moon? In terms of our normal, typical experience of video conferencing, what's available in this context versus what we are accustomed to as terrestrial users of WebEx?
D
I love that you asked that question, because I should have mentioned it on the slide, but because I lost the build I maybe missed some points. So on the ground, they have normal WebEx devices, like our Board and our room systems and our desktop systems, like the Desk Pro, for example, and they're all connected in a conference with the Orion spacecraft, right?
D
So they can all be having a real-time meeting, and the application that I believe NASA wants to use it for is to invite, you know, students, kids, other people from the public into their ground control facility to experience this and actually collaborate with astronauts.
J
I was curious, because this is definitely a very interesting first use case, and looking into the future we probably will face situations where we will not have line of sight, right? Because this is the first time to the Moon. Yeah.
J
Have you thought, or do you have any idea: in this case you showed that architecture with the proxy, how you simplify the signaling onto the space station. But if ever we put more relays, or a LEO satellite network, would that also work? Have you guys thought about that multi-hop architecture?
D
That is a really good question, and we didn't, because we didn't actually have a lot of detail from NASA about the intricate, you know, inside details of how their network works. But it is a satellite-relay-based network, which is why the latencies are up to 20 seconds, because the average round trip from the Earth to the Moon is much less than 20 seconds.
D
It's like three. But they said you need to engineer for up to 20 seconds, and that's because it's going to be bouncing off these satellites: when Orion is around the back side of the Moon, it has to hit a satellite before it comes to the Earth, and so forth. But I'm not as familiar with the underlying details of how that satellite network works and what we could do to optimize it further. Okay.
D
And I had to get permission to give this presentation, because even this amount of detail is more than we had previously disclosed in our respective marketing announcements and such. So I had to go to them and ask for permission to share this with you all.
K
Jonathan Lennox. Your comment about that reminds me, and someone asked me, so I'll ask: you mentioned you disabled all the TLS and SRTP. Was that just for practicality, or was that a policy, like, you want other people on the ground to be able to watch it live, or no?
D
Is
they
that's
a
good
question?
The
disabling
of
TLS
was
for
practical
reasoning
out
to
eliminate
the
rent,
multiple
round
trips
and
eliminate
TCP
and
go
with
UDP,
but
for
srtp.
That
was
simply
a
convenience
thing.
That's
like!
Well,
we
don't
need
it.
It's
an
enclosed,
almost
like
a
closed
circuit
television
kind
of
network.
You
know
it's
not
connected
to
the
internet
at
all.
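The round-trip cost behind that decision is easy to quantify. The handshake round-trip counts below are textbook figures, and the 20-second RTT is the worst case from the talk; this is an illustration, not a measurement:

```python
# Connection-setup cost at a worst-case 20-second round trip.
RTT = 20.0  # seconds, Earth-Orion worst case from the talk

setups = {
    "TCP + TLS 1.2": 1 + 2,   # TCP handshake plus two TLS round trips
    "TCP + TLS 1.3": 1 + 1,   # TCP handshake plus one TLS round trip
    "bare UDP": 0,            # no handshake before media can flow
}
for name, rtts in setups.items():
    print(f"{name}: {rtts * RTT:.0f} s before the first media packet")
```

At these latencies a minute can pass before any media flows, which is why stripping the handshakes mattered.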
C
I'll point out that, if we were doing this, you could do zero-RTT resumption super quick: if you started the sessions before they left the Earth, then you'd always just be resuming something that you already had set up with the tickets anyway.
L
Mo Zanaty, just wondering: so this was obviously for conversational, human-to-human use, but I assume that there's a lot of video, you know, from telemetry systems coming back, and maybe even robotic control that's delayed; a lot of it autonomous within 20 seconds, but maybe human-controlled beyond that. Is this some next step, to maybe think about broadening this, to use this core real-time communication technology?
L
The
video
feeds
all
the
or
even
any
sensory
feeds,
not
just
the
video
but
any
kind
of
sensory
data,
that's
streaming
back
to
ground
stations
and
then
maybe
something
that
has
a
20.
Second,
you
know
reaction
time
that
could
usefully
feedback
and
and
change
the
sensory
systems.
Is
there
any
future
work
to
look
at
doing
something
like
that?
I.
D
I
don't
know
we
had
speculated
on
some
things
like
that
too,
like,
for
example,
somebody
on
the
ground
could
say,
hey
Alexa,
you
know
turn
the
lights
on
or
turn
the
music
up
or
whatever
to
the
to
the
Alexa.
That's
running
in
the
Orion,
spacecraft,
right
and
I.
Don't
know
if
this
is
true
at
all.
D
This
is
a
total
rumor,
but
I
thought
that
I
understood
that
they
also
had
a
GoPro
mounted
inside
the
spacecraft,
pointing
at
the
iPad
so
that
from
the
ground
they
could
see
what's
being
displayed
on
the
iPad.
So
there
are
other
systems
and
then
I
don't
know
about
the
actual
spacecraft
itself
and
all
of
the
Telemetry
and
instrumentation,
and
all
of
that
on
the
spacecraft
itself.
I
have.
M
While
you
were
talking
I
I,
wondered
I,
know
some
of
these
WebEx
and
things
have
the
ability
to
do
closed
caption
real
time.
Do
you
use
the
closed
captioning
in
real
time?
Did
you
try
doing
that
because
I
think
that
would
be
quite
useful
because
of
the
time
delay
being
able
to
refer
to
the
text
so.
D
That
that's
a
fantastic
question
and
there's
a
challenge
with
that
which
I
can't
speak
to
authoritatively
because
I'm
not
on
the
Amazon
Alexa
team,
but
they
had
the
same
challenge
as
that
you
know.
Normally
the
device
is
able
to
interpret
the
Wake
word
so,
but
then
all
of
the
actual
you
know
automatic
speech,
recognition
and
all
of
that
stuff
is
done
back
in
in
the
cloud.
Well,
this
system's
a
closed
system.
There
is
no
Cloud.
There
is
no
back
end,
so
they
had
to
build
a
custom
version.
D
My
understanding
is
Amazon
had
to
build
a
custom
version
of
Alexa
to
do
a
lot
of
that
locally
and
we
would
have
the
same
problem
within
the
WebEx.
App
is
for
real-time
closed
captioning
and
real-time
translation.
We
rely
on
back-end
systems
to
do
all
of
that
ASR
and
text-to-speech
and
natural
language,
processing,
NLP
and
so
forth.
E
No worries. Okay, sorry, I just had a quick thought. You said that the terrestrial WebEx clients are just normal ones, right? So did you ever think about relaying the cloud processing through those, so that you can do all the captioning and those kinds of extra services? Because those aren't in the closed network only, I imagine; they're kind of walled off.
D
So
they
are
on
the
closed
Network.
My
understanding
so
I
think
the
entire
system
is
not
connected
to
the
internet
is
what
I
was
led
to
understand.
Oh
so
I
think
it
would
still
be
a
challenge
to,
but
but
yeah,
if
you
could,
you
know
if
those
could
be
dual
interfaced
with
one
leg
and
the
NASA
Network
and
another
leg
connected
to
the
public
internet,
then
absolutely
we
could
do
something
like
that.
Yeah
yeah.
C
Okay, all right.
A
I don't remember whether it was in your slides, but did you sort of look at FEC for some of your transmissions? Because it seems like a kind of useful thing.
D
You're
right,
that's
a
very
good
question
and-
and
we
should
have-
we
did
not
use
FEC
just
for
this
simple
fact
that
we
didn't
implement
it
in
that
custom
version
of
the
app
but
you're
right
in
normal
WebEx.
We
absolutely
use
FEC
and
we
also
use
now
RTX
re-transmission
and
in
combination
so
like
we
can
dynamically
switch
between
Forward,
Air
correction
and
RTX
re-transmission
and
we
can
protect
the
RTX
packets
so
that
the
re-transmissions
have
a
higher
chance
of
getting
through.
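As a flavour of what FEC buys at these latencies: a single XOR parity packet lets the receiver rebuild one lost packet from the survivors, with no retransmission round trip. This is the simplest textbook scheme, a sketch, not the FEC WebEx actually ships:

```python
def xor_parity(packets):
    """XOR all equal-length packets together to form one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(survivors, parity):
    """Rebuild the single missing packet: XOR of survivors plus parity."""
    return xor_parity(list(survivors) + [parity])

group = [b"pkt0", b"pkt1", b"pkt2"]   # a protected group of media packets
parity = xor_parity(group)            # sent alongside the group
# Suppose pkt1 is lost in transit: the receiver recovers it locally,
# instead of waiting up to 20 seconds for a retransmission.
assert recover([group[0], group[2]], parity) == b"pkt1"
```

The trade-off is the extra parity bandwidth, which is exactly why switching dynamically between FEC and RTX, as described above, is attractive.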
A
I guess, sort of to drive that: potentially it might be useful to understand what the loss patterns are around the transmissions, and what drives those, whether it's solar storms or...
C
Okay, so let me say thank you, and let me also say this is a wonderful blast from the past for me. At my very first IETF, I was here as a representative of a contractor to NASA, helping out with the very, very early days of the web. So seeing this work come here and get talked about, picking up some of the work that was done here in WebRTC, and some of the work that many of the companies here have contributed to in AV1 and AOMedia...
C
It's really great to see how far we've come, and how close it looks like the experience they'll have on Artemis II and III, when they become crewed missions, will be to what we can give our customers. So that's great. Thank you very much.