From YouTube: IETF105-HACKATHON-20190721-1400
Description
HACKATHON meeting session at IETF105
2019/07/21 1400
https://datatracker.ietf.org/meeting/105/proceedings/
D
So we also needed to create some sort of unified handling of QUIC versions, because we now start to see a lot of different versions and different uses of these experimental bits. That's basically what we tried to do, and what got done was that we made a table-driven version of experimental and non-experimental QUIC versions.
D
We implemented two drafts of these loss measurement bits, and we are currently developing some sort of reporting of loss measurement events in our tool. We're also integrating this into some testing environments — using, for instance, a Mininet test VM that we can use to test a lot of different network scenarios. We also did a bunch of bug fixes.
D
This is a list of how we are handling different QUIC versions. Previously we had kind of nasty structures, with different behaviors based on different QUIC versions; basically we had to go into every function and check which version it was. Now we have generalized this quite nicely, so that we can add new experimental versions with support for different header formats, etc., and make it much more dynamic.
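A table-driven registry like the one described can be sketched roughly like this — a minimal illustration only, where the version numbers, the `VersionProfile` fields, and `profile_for` are all invented placeholders, not the team's actual code:

```python
# Hypothetical sketch of table-driven QUIC version handling.
# Version numbers and profile fields are illustrative placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class VersionProfile:
    name: str
    experimental: bool
    reserved_bits: int  # how many reserved header bits the experiment uses

# One row per supported version: adding a new experimental version means
# adding an entry here rather than editing a check in every function.
VERSION_TABLE = {
    0x00000001: VersionProfile("QUICv1", experimental=False, reserved_bits=0),
    0xFF00001D: VersionProfile("draft-29", experimental=True, reserved_bits=0),
    0x0A0A0A0A: VersionProfile("loss-bits-experiment", experimental=True,
                               reserved_bits=2),  # made-up version number
}

def profile_for(version: int) -> VersionProfile:
    """Look up how to handle a packet of the given QUIC version."""
    try:
        return VERSION_TABLE[version]
    except KeyError:
        raise ValueError(f"unsupported QUIC version {version:#010x}")
```

The point of the table is exactly what the speaker describes: version-specific behavior lives in one place instead of being re-checked inside every function.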
D
So we have a bunch of progress in there. What did we learn? That supporting all QUIC versions is quite demanding — especially when we have a protocol that's evolving and we have a lot of experimental proposals — so it pays to have a nice structure for handling all of these different cases.
D
We see that both these loss detection proposals now have measurements in real networks, and we hope to be able to facilitate more measurements of this. But the problem is that we only have two reserved bits, and two reserved bits for two proposals is not that much. This was done with me together with Fabio, Mauro and Alexandre. You can find our tool — it's been dumped on GitHub — and the new measurement proposals you can find at these links. So yeah, that's it.
E
Hi, my name is Sachin Vishwarupe, I'm from Cisco Systems, and myself and a few of us worked on the PTP notification. I am at IETF for the first time — as you can see, I did not use the slide format; maybe next time we'll follow the same thing. So essentially, in Cisco I work on IP fabric. As some of you may or may not be aware, these days the paradigm in media is also changing.
E
It's moving away from the standard SDI to IP-based fabrics, and that's where we were going. One of the key things there is PTP: the synchronization between your media gateways and endpoints, as well as the video/audio sync, is very critical, and the accuracy needs to be less than 400 nanoseconds. That's the reason we use PTP. With PTP today we actually have, as I noted, the standard YANG already defined, like RFC 8575.
E
But what is more critical for us is to get the notification, and that's because of the number of syncs which are involved in PTP. In the Precision Time Protocol we typically sync eight times in a second. So we cannot expect a network management system to poll and find out the deviation from those samples, because if you think about one day, for a single switch we generate around 700,000 sample points — and so we want to do it in a distributed fashion.
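The arithmetic behind that number checks out — at eight sync messages per second, one switch accumulates roughly 700,000 samples over a day:

```python
# Back-of-the-envelope check of the sample volume quoted above.
syncs_per_second = 8             # typical PTP sync message rate
seconds_per_day = 24 * 60 * 60   # 86,400
samples_per_day = syncs_per_second * seconds_per_day
print(samples_per_day)           # 691200, i.e. roughly 700,000 per switch
```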
E
Okay, this slide again talks about synchronization. Earlier, when we started on audio/video, you might have been using a lower-rate signal; but think about Ultra HD, 4K, 8K now — the buffer cannot be that big, right? So that's the reason synchronization is critical. I put some more things on here, but more importantly, you can think of audio and video needing to be in sync, and that's what we have been working on.
E
So these are the use cases we wanted to address in the hackathon. Essentially, for a live event we wanted to monitor and get notifications; again, depending upon the media profile, the duration and the stakes, all those parameters need to vary. And second, we wanted to do the configuration as well as generate the notification.
E
So in the hackathon we had these deliverables. We defined a PTP notification YANG model — again, we need to review it with the team. We developed a third-party application on a Cisco switch, so the real deliverable here was a Python script which runs on the switch and pushes our notification away to the existing product. We have a management solution, and we extended that to introduce a new REST API to consume that notification and overlay the PTP information on top.
E
So it's kind of a pie chart, but I am sharing the slide so you can see the example YANG payload as well as the notification, and this is how the user interface looks. What you are seeing here is a spine-leaf topology with Cisco switches, and based upon the PTP offset threshold, those switches are color-coded at runtime. Based on the notification, which comes over WebSocket, it dynamically updates the screen: based on the number of samples which have deviated, we actually color-code those. And this was just an idea, just for the demo.
E
That part — and this is the back end, where we introduced a new application with the Python script which completely integrates with the Cisco CLI. So it's as if it's an original part of the Cisco box — it's coming from Cisco — and you can monitor and control the notification part of it. So that was the idea: the whole idea is to take the notification and integrate it with PTP. Thank you.
F
So this was a project to do IOAM, which is In situ Operations, Administration, and Maintenance. The basic idea, if you're not familiar, is to have an IPv6 extension header — a hop-by-hop option — that contains an IOAM option, which is information that the router fills in as the packet goes along its path. So the idea is to take metrics and performance measurements from routers in a path. The goal we had today and yesterday was to implement something and show some interoperability.
F
There were a couple of drafts on IOAM: one is on the specific option, one is on the data format. What we did: we brought up UDP ping — that's a little program, a UDP ping that sets the extension header and the IOAM option — and we had a client, a server and one router, and we were able to follow the path and have the information filled in. The kernel implementation for the router was provided by Justin; that's at this GitHub. Separately, the client and the server code was a different implementation.
F
All of these were on Linux. Hopefully, the next IETF hackathon will have some more router or host implementations join in. So the way this works, for what we do: we ping a remote host, add a few options, and as you can see, we got some response back. We parse the IOAM message that we got back, and sure enough, the router filled it in. More interesting is the node information — this is directly from the IOAM draft — various pieces of information: so I have the egress interface and ingress interface, timestamps, transit delay.
F
We did learn a few things, particularly trying to get things to interoperate: getting the lengths right when parsing fields — particularly fields that hold lengths that correlate to other lengths — that was kind of interesting. Bit fields don't make things easier in this regard, especially when they're split across byte boundaries. We also have a few suggestions for IPPM, particularly on some of the data formatting in the data format: for instance, it's a lot easier to deal with fixed fields than variable length.
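The byte-boundary problem can be illustrated with a small parsing sketch. The field widths below loosely follow the IOAM pre-allocated trace option header from draft-ietf-ippm-ioam-data (Namespace-ID 16 bits, NodeLen 5, Flags 4, RemainingLen 7, Trace-Type 24), but treat the layout as illustrative rather than a faithful implementation of the draft:

```python
# Parsing fields that are not byte-aligned: load the header into one
# integer so bit offsets are easy to reason about, instead of shifting
# within individual bytes. Field widths modeled loosely on the IOAM
# trace option header; illustrative only.

def parse_trace_header(data: bytes) -> dict:
    if len(data) < 8:
        raise ValueError("need 8 bytes of trace-option header")
    word = int.from_bytes(data[:8], "big")  # 64 bits, network byte order

    def bits(offset: int, width: int) -> int:
        """Extract `width` bits starting `offset` bits from the top."""
        return (word >> (64 - offset - width)) & ((1 << width) - 1)

    return {
        "namespace_id":  bits(0, 16),
        "node_len":      bits(16, 5),   # 5 bits
        "flags":         bits(21, 4),   # 4 bits, crosses a byte boundary
        "remaining_len": bits(25, 7),
        "trace_type":    bits(32, 24),
    }

hdr = parse_trace_header(bytes([0x12, 0x34, 0b10101_101, 0b1_0000011,
                                0x80, 0x00, 0x01, 0x00]))
```

Note how the 4-bit flags field takes its top three bits from one byte and its low bit from the next — exactly the kind of split that is easy to get wrong when parsing byte by byte.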
G
Okay, so just some background information on ILNP. We had the first demo of what will eventually be a public release of the code at the last IETF, and we've just been developing that, especially trying to fix some bugs that we found at IETF 104, which were very useful to know about. So the plan here was really to make sure that ILNP could work over a real network — that's the idea eventually — and so what we had was a network that consisted of some low-end routers, just commercial routers.
G
They run IPv6 only. The idea is that ILNP works completely end to end, so the core routers just think you're running IPv6, whereas actually you're running ILNP. The other thing, which came through a fortuitous conversation with Stéphane Bortzmeyer, was some DNS improvements — help with DNS in general — and I'll say a little bit more about those in a slide coming up.
G
So these are the key things that we managed to work out today: we did some test runs with TCP over ILNP running over these commercial routers, between two boxes that were running an ILNP-modified Linux kernel. And we had some discussions on fixing a possible issue with DNS additional-information processing, and that was actually fixed, so that was a good outcome. I spoke to Stéphane Bortzmeyer after I uploaded.
G
These slides — I should also say thanks: I didn't put his name in, but Petr Špaček actually did the coding to put this fix into one of the DNS servers, so thank you for that as well. This is the demo that we had running. It's not really easy to see, but you've got two boxes at — what is, for you, the right-hand edge — running the ILNP code, and in the middle we've got four little edge-router boxes.
G
Those are R1, R2, R3, marked in the logical diagram, and we emulated a mobile node moving across them. So no Mobile IP — they're just doing unicast routing — and what happened is that the mobile node moves across this while running a TCP flow from the blue node, the correspondent node, while it moves. We just wanted to see: can ILNP do what it's meant to do, which is to change its location seamlessly as it moves across those different networks? And the results we had are shown here on this graph.
G
The individual throughput on each network is shown on the top three facets of the graph, and the bottom graph is just the aggregate throughput you see at the correspondent node. So that was pretty good: we got a consistent TCP flow running across those commercial routers, running ILNP end to end — the routers were just doing unicast routing, but we had a mobile node. This was work that was done mainly by my PhD student.
H
Hi, I'm Theresa. I'm presenting for the TAPS table, which is for transport services. Just a quick recap of what TAPS is: we are developing a sort of abstract API for different transport protocols, and those are just the transport protocols our current PyTAPS implementation supports. Of course, it would be nice to have QUIC in there as well.
H
So, of course, those are not really equivalent, but if we have transport protocols that provide sort of the same features, we can try them at the same time and sort of do happy eyeballs on them — and we are working on that right now. Also, we are working on getting multicast to work, which is kind of work in progress. And we have a nice, interesting concept called framers.
H
So the idea is that you get a byte stream from TCP, but then you have a sort of delimiter that delimits your byte stream into messages. This is a concept that has been added to — or rather, expanded in — the recent draft, and so we've been discussing framers a lot, and there's also going to be more discussion of this concept in the working group. We have some feedback, because we have implemented it in our implementation and some parts were unclear.
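The framer idea can be sketched with a simple length-prefix delimiter — this is just an illustration of the concept, not the PyTAPS API:

```python
# A minimal framer: the sender prefixes each message with a 2-byte
# big-endian length; the receiver feeds arbitrary chunks of the byte
# stream in and gets complete messages back out.
import struct

class LengthPrefixFramer:
    def __init__(self):
        self._buf = b""

    def frame(self, message: bytes) -> bytes:
        """Sender side: delimit a message for the byte stream."""
        return struct.pack("!H", len(message)) + message

    def deframe(self, chunk: bytes):
        """Receiver side: yield every complete message buffered so far."""
        self._buf += chunk
        while len(self._buf) >= 2:
            (length,) = struct.unpack("!H", self._buf[:2])
            if len(self._buf) < 2 + length:
                break                       # message not complete yet
            yield self._buf[2:2 + length]
            self._buf = self._buf[2 + length:]

f = LengthPrefixFramer()
wire = f.frame(b"hello") + f.frame(b"world")
# Deliver the stream in awkward chunk sizes, as TCP may:
messages = list(f.deframe(wire[:4])) + list(f.deframe(wire[4:]))
```

The receiver gets `[b"hello", b"world"]` back regardless of how the stream was chunked, which is exactly the message-boundary service a framer adds on top of TCP.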
H
Also, we're going to discuss how much of this we have to specify in TAPS. We could fix some bugs in our implementation, obviously, but we are also sort of modeling the input that we get from the application, and maybe we're also going to model the output — sort of the resulting connection. So let's see where this leads in terms of comparing different implementations. And we have some other minor additions to the draft. The people who were there the entire week were mostly Jake, Max and me.
I
So this is our team photo; during the weekend we worked together. This figure shows I2NSF — the interface to network security functions — you may be familiar with this one. This time we embarked with a commercial firewall NSF: previously we used the open-source Suricata as the web filter, and this time we combined the commercial one.
I
Our firewall and the open-source Suricata web filter run together on top of a commercial public cloud system developed by ETRI in Korea. These slides demonstrate — you can see the demonstration here — that we register the NSFs' features, and also that the I2NSF consumer uses it to deliver the security policy from a high-level point of view; the security controller then translates the high-level policy into a low-level policy. So this is what you can see.
I
What you see is what you get: we provide the user interface to easily configure the security functions using this testbed. We provide two scenarios, including filtering. So this time we proved our concept — the interfaces are working on top of a commercial platform — and we also showed that the translator is working well. Tomorrow at the hack demo, where we can present, we will be happy to see you. Thank you for your listening. Thank you.
I
Sorry — the SRv6 YANG module: the SRv6 YANG model is under development in the IETF, so we're looking forward to a path to support it. Operators want to control and use the IETF YANG model to interact with the vendors' native YANG models, to bridge the gap between them across devices. So the project here is: we used the available YANG modules to do configuration — to configure SRv6 — and implemented its key features in the SRv6 model. Also, we want to continue development.
I
Okay, as I said, we worked on what we had, and then we implemented three sub-modules — one for the SRv6 basic global mode and two more SRv6 modes — and we also developed an app to map the IETF model to the vendor's native YANG model.
C
The major draft is the first one shown here, and we have a few continuation drafts. What we got done this weekend was to merge several feature branches in our project: we had had separate developments over the last few months, which resulted in basically the compression being in one branch and fragmentation in another branch. So we merged that, so it's fully integrated, and we got the basic tests running again — a few things still need to be ironed out.
C
And one of the branches provides the new fragmentation mode that was introduced last fall, including extensive testing of that. I did a few other functionalities — simple OAM stuff, like ping responses and all that — so that's one major achievement. The other one is making the project easier to use for newcomers, so we created a user guide for how you run the code, simply, when you get started.
C
We really want to lower the adoption barrier to this project, so that newcomers can get used to it without draining too much of the resources of the old-timers. Also, we want to provide complete examples, and we want to become the reference implementation for SCHC. That's our team: ten members, one new hackathon member, three people remote — from Japan, Spain and Chile — which allowed us to run 24/7 over this weekend by having the Japanese working while we were sleeping. That's it. Thank you. Okay.
I
The second one is vehicular neighbor discovery, with address registration and multi-hop DAD, and we also take advantage of intermediate vehicles to reduce the DAD time. We can also shorten the initialization of a TCP connection a little. So we proved the two drafts — IPv6 over OCB and vehicular neighbor discovery. This is the poster; this is the topology. So this figure shows the overall architecture: you can see a vehicle can communicate with other vehicles using V2V, and also communicate V2I.
I
So our idea is: you can see the vehicle, even though it is not in communication range of the roadside unit — which is providing Internet connectivity to vehicles — it can initiate DAD using an intermediate vehicle and register using multi-hop connectivity. Once it is configured with a global IP address, it can start a TCP or UDP connection.
I
So we evaluated it using simulation, as in this figure: we used SUMO for the road-traffic simulator and OMNeT++ for the network simulator. You can see we use a 3-hop multi-hop DAD to reduce the DAD delay, and we can also start a TCP connection more quickly. This is the protocol stack: on the left-hand side you can see the vehicle's protocol stack — this is IP, and this is the WAVE short message protocol for safety.
I
So we implemented the vehicle's logical link layer and neighbor discovery for IPv6 over 802.11-OCB. The simulation result is that our vehicular neighbor discovery can reduce the delay compared with the legacy neighbor discovery. So, during the weekend, we proved the concepts of IPv6 over OCB and vehicular neighbor discovery, and that they can work for the nodes here in the vehicular network. You can take a look at the other material, the poster, and also the GitHub link. Thank you for your listening. Thank you.
K
Then the user can play the game again and we can learn more about the user. So for the outcomes: we have begun organizing our system to take into account these user preferences, and there's a lot more discussion — a lot more data — that needs to be generated before we can implement any of this. We have some questions going forward, and I'm happy to speak with anyone who wants to know about this project. Okay, thank you very much.
M
This is a mechanism by which manufacturers and network administrators can learn whether or not their devices are actually putting out policy recommendations that are useful to those devices, or if there's misconfiguration. There was some work on MUD Maker, which generates the JSON: the guys from CIRA Labs completely redid the code, which was nice, because it was in PHP — my PHP, which is really bad — and now it's all in Python. Thank you, guys.
M
There was DPP/MUD integration going on; there was a verification mechanism being developed by the folks at NCCoE; and then we had some GRASP work and the discovery work going on. Let's see here — what did we plan to solve? Actually, we just planned to all get together and figure out what to solve, and that's what we did. So, as I mentioned, I think that covered a lot of this ground already, on MUD and MUD reporting.
M
In fact, I have a patch already that just needs to get committed. We had a lot of interoperability testing going on: we had a couple of guys here from SECOM who went and actually implemented MUD right on the spot in their devices and tested against a number of MUD managers, generating themselves a MUD file that was appropriate, and tested their access.
M
We can now test their access. Then we had some additional integration going on, in terms of filters for east-west or north-south traffic and in terms of the verification code — yeah, we got a lot of work going on. We also know we need to fail fast; we have that on some of our code, and we have a lot of work to do on the MUD reporter. Just a couple of screenshots of some of the stuff that went on here.
M
This is the thing that will generate MUD files; in terms of the verification, here you have Darshak; and here are our gentlemen from SECOM, in terms of them bringing their hardware that implemented MUD, either with DPP or directly. And here's the long list of people who actually did a lot of work — and thanks to a bunch of organizations who are supporting them. Thank you.
N
Okay, so hi — this was a spontaneous project. As you may have known, there's DHCPv6 PD on the hackathon network, and we were chatting with people, and I put together code for FRRouting to capture those packets, pick up the appropriate routes and install them. That wasn't previously possible; now it is. And that's it.
O
Hi, everybody. I'm sorry we didn't use the format — in Quebec we call that being a distinct society. This is the COIN proposed RG P4 hackathon, and this was our first one, as you can see, because we didn't know about the format. Who are we? We are actually a proposed research group — we're still waiting to be a real one — but we want to look at everything that deals with computing in the network, investigating this whole continuum of putting computation from the data center all the way to the edge.
O
We want to look at architectures, we want to look at protocols, and we want to look at real-world use cases — and this is the reason we're having this hackathon: a bunch of people invented a language called P4, which is currently being used to do some specific programming in switches, and we wanted to look at this idea of the cloud-to-edge computing continuum with P4. We didn't have a specific project, except for our remote participant.
O
Most of us were pretty much new users, and because of that we really have to give a shout-out to the Montreal company who sent us two engineers for two days to help us set up our environments and develop the code — so that at the end we actually ended up doing real work, which is like, yay! So, what we did — and yes, we, me and PL, we are gone, but hey, thank you guys — we did the basic examples.
O
We actually had one very, very proficient — sadly, remote — participant who actually started implementing an IPv6 switch with machine learning in P4. He checked his code into the GitHub, and it's related to work that was done before in IPv4. And we actually poached people from other tables to join us.
O
We had 12 participants at the end, so that was actually pretty surprising for us, and the people we poached included people who started looking at p4-to-golang; and this morning we did packet filtering. We gathered a ton of information — and I'm almost done. So, our next steps: we want to continue gathering projects. We think that we have a good chance to become a real research group, so we would like to have a COIN interim, and we want to have another hackathon.
P
Good afternoon, everyone. The Measurement and Analysis for Protocols research group participated in its third hackathon this time, and I'll tell you what we were up to. The problem that we were attacking at this meeting — during this hackathon, rather — was to produce a reference implementation for doing IP address aggregation.
P
Two applications of this: one is address-space anonymization, where you aggregate — say, your IPv4 address to a /24; but what do you do with IPv6? Another application is, for instance, to find homogeneous populations — for instance, for content delivery networks to do matchmaking between the users and the content. The specific problem to solve was: how do we take something like a Patricia tree — if you use Perl or Python, you know this as Net::Patricia, or in Python, py-radix or pytricia —
P
How do you use a data structure like that to represent all the activity in the entire Internet? It's too big when you have tens of billions or hundreds of billions of v6 addresses. So, to solve it, we decided to take existing code — the C aguri tree, which is an implementation of a Patricia tree — and make portions of the tree immutable, and I'll show you why that solves the problem. Basically, it allows you to solve the problem by partitioning it: you can partition the set of addresses arbitrarily into small sets.
P
You can put them on a cluster, and you produce an intermediate result where you can capture the state of the tree as you're performing some operation on it, and then do it iteratively. So what were the new ideas, and what did the team agree on? Well, the team, it turns out, today was just me — so we agreed on everything. We agreed to use the aguri tree, and we really agreed that this partitioning problem could be solved by making portions of the tree immutable.
P
So that's the novel design idea in a Patricia tree; the GitHub upload is pending. I'll just show you exactly what it was, because we managed to get it done in just a day. So, what we got done — imagine you have a set of active addresses. Here's ten; I give you six /64s, and you can put them into one of these trees that are commonly used — it's kind of like a routing table — so we put them in the tree.
P
That gives an intermediate result that I can then process in parallel — say, on a MapReduce cluster with hundreds of machines — and then iterate. To give you an example of why this is important: in IPv4 you probably know what a /24 is. In IPv6, even with this small data set of 180,000 active /64s, this shows that about half of them reach sufficient aggregation at /56, but the other half needed to be aggregated to /40. And today in the v6 Internet,
P
a lot of people use /48, which is right in the middle and a horrible compromise, because either you could have had a better answer, or you're not aggregating enough. So what we learned is that this is a candidate best practice, and we'll carry it to the working group. I made a couple of other design decisions — again unanimous, with just me — based on publicly available open-source code from some colleagues, including Kenjiro Cho, and we're going to meet on Friday and I'll go over some more.
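The /56-versus-/40 observation can be illustrated with a toy aggregation over made-up /64s from the IPv6 documentation prefix; `aggregate` here is an invented helper for illustration, not the aguri code:

```python
# Given a set of active IPv6 /64s, how many covering prefixes remain if
# you aggregate at /56 versus /40? Addresses are fabricated examples
# from the documentation prefix 2001:db8::/32.
import ipaddress

active_64s = [
    # eight /64s that all sit inside one /56 ...
    ipaddress.ip_network("2001:db8:0:%x::/64" % i) for i in range(8)
] + [
    # ... and eight /64s spread across eight different /56s
    ipaddress.ip_network("2001:db8:1:%x00::/64" % i) for i in range(8)
]

def aggregate(nets, plen):
    """Collapse each /64 to its covering /plen prefix and dedupe."""
    return {n.supernet(new_prefix=plen) for n in nets}

print(len(aggregate(active_64s, 56)))   # 9: /56 helps one group, not the other
print(len(aggregate(active_64s, 40)))   # 1: /40 collapses everything
```

This is the tension the speaker describes: a single fixed prefix length such as /48 over-aggregates some populations and under-aggregates others, which is why the data-driven aggregation is interesting.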
Q
Everybody knows how busy that can be. Additional access types were made possible by the volunteers that joined, and I just want to mention all the names here: Ryan Hoffman from Telus — Ryan's going to speak a little bit about his results — and Timothy Carlin, Marian Dillon and Kyle, all from UNH at the InterOperability Lab. Thanks so much for joining. Okay, so here we go — we ran the tests.
Q
When we go back and measure with the UDP speed test, now we got a lot closer to the limit of one gigabit per second; and then in the afternoon on Saturday everybody was pounding away here, and we really need to learn the signature of what that is. And — my time went away, what the hell? Oh, there it is. Oh my gosh, it's only 46 seconds left — go, go.
J
So I wanted to include non-congested links, so we set up a connection between our Telus lab in Edmonton, Alberta, and Telus's New Jersey lab, to perform the same kind of test but in bulk — using two servers here in New Jersey just to be able to get the bulk of tests that we needed. Unfortunately, the server in New Jersey only had a GUI, so this shows the comparative results.
J
A consistent near-gigabit-speed result with the UDP speed test, as opposed to the TCP test, which was highly variable. That's really important information for us, because it's difficult for a technician who's going into a home, selling a service, and using that test to reveal to the customer what their achievable speed is — and it being subpar.
Q
The UNH folks walked in this morning, got this test running, and resolved a problem with their router's firewall screening UDP traffic, and made it work properly right after that. It was a great effort in just a few hours here this morning, and we learned a lot of stuff for potential development — and, you know, you can learn a lot from testing different access types, that's for sure. Thanks very much.
R
I'll make this really fast. So we're here working on making discovery work with less reliance on multicast, because multicast is slow, it's unreliable, and it's wasteful of shared wireless spectrum. There's a list here of the drafts: the Discovery Proxy is based on the hybrid draft, which uses DNS Push Notifications, which in turn builds on DNS Stateful Operations. We've been building code for OpenWrt running on these little GL.iNet AR750s — a little pocket gigabit router.
R
I was here working with Ted, and Barbara joined us — thank you, Barbara. We did a bunch of work on integration: OpenWrt package management, and dealing with asynchronous change notifications with ubus, to really polish this code. This is all available on the IETF hackathon GitHub, and we now have pre-built packages: you can download it yourself, run this, and in about five minutes have your own Discovery Proxy running at home. Thank you.
T
My name is Alex Gouaillard, and it's a very difficult name to say, so people call me Dr. Alex. I will represent the team here about WebRTC. WebRTC is a technology to bring real-time communication — audio, video and data — to the web, and it has an IETF counterpart, which is RTCWEB, for all the protocols: the encryption, the security, the codecs and so on.
T
The last missing piece is called simulcast, which is the capacity to send different resolutions of audio and video simultaneously over the wire. So some of us came around here today to try to push that forward, so that we can finally have a finalized spec and people can implement products on top of it. We had two browsers represented today: Firefox and Chrome.
T
The two others were excused for visa reasons and other things. We had three media servers represented, to give feedback on implementation, which is also very important; and finally, three application vendors that were using both browsers and media servers, to help communicate about the needed and missing functionalities and different bugs.
T
We went through ten different bugs in different browsers, and we also helped different vendors implement simulcast in their media servers — or at least made progress there — and provided feedback on the missing pieces. So, all in all, a very efficient session, and we're very happy: we made a lot of progress in two days that would otherwise not have been possible without the opportunity for the face-to-face that the hackathon gave us. So thanks to the sponsor, and thanks to the chairs.
U
As we know, four data types have already been defined in the IETF draft ioam-data, including two trace types — pre-allocated and incremental — one POT, and the last is an E2E type. Now we define another new tracing type: we call it postcard-based telemetry. So what's the difference between postcard-based telemetry and the IOAM tracing type? First, we separate the telemetry instruction header and the metadata.
U
Then I will explain why we introduced this new type; we list three reasons. The first is detection: using this tracing type, we can detect the location of packet loss. The second is that we can solve the encapsulation issue — the packet keeps a fixed header (sorry, a little nervous). And the last one is to differentiate the QoS priority of the metadata from the user traffic.
U
This page shows this project in the hackathon. First, there is a network domain including four routers, and a tester will send the test flows and also receive the test flows, with IPv6 as the transport protocol. Router 4 is the ingress node that encapsulates the IOAM header; Router 3 is the egress node that decapsulates the IOAM header; and Router 6 and Router 5 are the transit nodes. The yellow one is the metadata, which is collected to the collector.
S
Stand like this, keep it tight — the floor here. So the floor is the DNS team's. The DNS table was quite an eclectic group of people — like the DNS protocol, probably. So we did something about DNS privacy, DNS support for specific networks, provisioning, and miscellaneous stuff — the catch-all.
S
So the DNS privacy work we worked on was zone transfer over TLS, XoT. You want to protect your zone transfers, so they're encrypted, etc. One variant is the push model, and there is also a kind of subscription model; I'm sorry, yeah, a subscription model. There is also DoH proxy plugin support for any web server, by Peter; it's a FastCGI plug-in interface. And measurements in preparation for DoT and DoH. So there have been a lot of discussions on DoH in the DNS community, and not everything is decided.
S
That choice for end-users and deployment are important, so I think it's good work that we include this DoH support in different pieces of software. Good. Then the DNS support for specific networks: DNS is kind of the Swiss Army knife of the Internet. Of course, I'm working on DNS, so I have a specific view on this, but it also applies to what was presented already.
S
There has to be some middlebox that does some translation, so there's the DNS64 prefix discovery by Mark, implemented in BIND. Again, DNS as a provisioning tool here: if you want to do something with anycast, and you don't want to create an amplifier for DDoS attacks, you want to have an anycast open resolver with something like a DNS server cookie. So you protect your open recursor from DDoS attacks with spoofed addresses.
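A DNS server cookie (RFC 7873) lets an open anycast resolver cheaply reject queries from spoofed source addresses: the server cookie binds the client's random cookie to its source IP. RFC 7873 leaves the server-side algorithm to the implementation; the sketch below assumes HMAC-SHA256 truncated to 16 bytes purely for illustration (the later RFC 9018 standardized a SipHash-2-4 construction instead).

```python
import hashlib
import hmac
import os

def server_cookie(client_cookie, client_ip, secret):
    # Bind the cookie to the client's 8-byte cookie and its source
    # address, so a query from a spoofed address cannot present a
    # valid server cookie. Algorithm choice is an assumption here.
    assert len(client_cookie) == 8
    mac = hmac.new(secret, client_cookie + client_ip.encode(), hashlib.sha256)
    return mac.digest()[:16]  # server cookie may be 8..32 bytes (RFC 7873)

secret = b"rotate-me-regularly"
cc = os.urandom(8)            # client cookie: 8 random bytes
sc = server_cookie(cc, "192.0.2.1", secret)
assert len(sc) == 16
assert sc == server_cookie(cc, "192.0.2.1", secret)   # same inputs verify
assert sc != server_cookie(cc, "203.0.113.9", secret) # spoofed source fails
```

A resolver can then answer cookie-less or bad-cookie UDP queries with a minimal BADCOOKIE response instead of doing full recursion, defusing amplification.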
S
This is implemented in BIND and in Unbound and NSD. Another provisioning thing is temporary records in the DNS. Sometimes, as with Let's Encrypt's ACME protocol, you want to publish some information for a short time in your DNS zone so that you, the owner of a domain name, can get your certificate. With timeout resource records, after that time the information is removed from your zone. Another important thing is the HTTPSSVC record; it's a kind of service record, and it has been a long-standing problem to solve.
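The ACME case mentioned here is the dns-01 challenge: you publish a short-lived TXT record proving control of the zone, then remove it once the certificate is issued, which is exactly what timeout resource records would automate. A sketch of computing the record value per RFC 8555 (the token and thumbprint are made-up placeholders, not a real ACME exchange):

```python
import base64
import hashlib

def dns01_txt_value(token, account_thumbprint):
    # RFC 8555: the TXT RDATA is base64url(SHA-256(token "." thumbprint)),
    # without "=" padding.
    key_auth = f"{token}.{account_thumbprint}".encode()
    digest = hashlib.sha256(key_auth).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Placeholder inputs for illustration only:
value = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                        "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
# Published briefly, e.g. with a short TTL, then removed (or timed out):
record = f'_acme-challenge.example.com. 120 IN TXT "{value}"'
assert len(value) == 43   # 32 digest bytes, base64url without padding
```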
S
Actually, so: how do you provision your web service, and how do you address it in your DNS? There have been a number of solutions over the years, by the DNS community and by the HTTP community, and this proposal seems to have received, this draft proposal has received, positive feedback from both working groups. So there's a lot of interest here, and there's an implementation in Unbound. For the miscellaneous catch-all, we did some work on YAML formats for DNS packets. The original RFC is actually about JSON, but the original author of the RFC says:
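What the HTTPSSVC proposal adds is easiest to see from its presentation format: one record carries a priority, a target name, and service parameters such as ALPN, so a client can learn, say, about HTTP/3 support in a single lookup. A sketch that just assembles such a zone-file line; the syntax follows the general shape of the draft (later standardized as SVCB/HTTPS in RFC 9460), and the parameter handling is simplified:

```python
def https_record(owner, ttl, priority, target, **params):
    # SVCB/HTTPS presentation format: priority, target name, then
    # key=value service parameters. Quoting rules are simplified here.
    svc = " ".join(f'{k.replace("_", "-")}="{v}"' for k, v in params.items())
    return f"{owner} {ttl} IN HTTPS {priority} {target} {svc}".rstrip()

# A ServiceMode record at the zone apex advertising HTTP/2 and HTTP/3:
line = https_record("example.com.", 7200, 1, ".", alpn="h2,h3")
assert line == 'example.com. 7200 IN HTTPS 1 . alpn="h2,h3"'
```

This is why both the DNS and HTTP communities care: it replaces ad-hoc mechanisms (CNAME-at-apex workarounds, Alt-Svc bootstrapping) with one record type.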
S
well, YAML is fine: it's readable, and it's already in use in the proof of concept of the root server measurements framework. That's also wrapping up. We did a lot of interop between ourselves, between different groups, and with the web community. I think we have done some good work. That's all, and these are, yeah.
V
This is about L4S, going on in the transport area in the TCPM and TSVWG working groups. There's a bit of background here, but I'm not looking to dwell on it. There's the page where our code is all linked from, and the specs we're working to. We had a good number of people; we actually expected to have nearly all remote and hardly anyone here, but it worked out the other way around.
V
Something like seven newcomers, which was pretty good, and quite a few projects we didn't expect. I'll jump to the next slide, then I'll come back. We did plan something that didn't happen, with a bunch of people remote who were all new; it just became impractical, basically because of the time in India, and it didn't quite get finished. But going back: quite a few projects. We brought a testbed with us and got it all set up, and found
V
there were problems with latency limits, the kernel screwing up what we had intended to do, etc.; we had to rebuild things, blah blah. Richard got on well with Michael Tüxen, implementing Accurate ECN in FreeBSD, with Michael helping there. Also, I suppose the highlight really was the L4S testbed. We had the SCE people come over and give us their flent setup that they wanted us to evaluate on. We started working together on that, which will probably continue during the week. That's right, and I'll now come on to that.
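Both L4S and SCE hinge on the two ECN bits in the IP header: L4S proposes ECT(1) as an input label that classifies traffic into the low-latency queue, while SCE proposes it as an output signal ("some congestion experienced"). A small sketch of reading and writing those bits in the IPv4 TOS / IPv6 traffic-class byte:

```python
# The ECN field is the low 2 bits of the TOS/traffic-class byte (RFC 3168).
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def ecn_of(tos):
    return tos & 0b11

def with_ecn(tos, codepoint):
    # Replace only the ECN bits, leaving the 6 DSCP bits untouched.
    return (tos & ~0b11) | codepoint

tos = with_ecn(0x00, ECT1)      # mark as ECT(1): L4S input / SCE output
assert ecn_of(tos) == ECT1
tos = with_ecn(tos, CE)         # an AQM escalates to Congestion Experienced
assert ecn_of(tos) == CE
assert tos & ~0b11 == 0         # DSCP bits unchanged
```

The conflict between the two proposals is precisely that there is only this one spare codepoint, ECT(1), which is why joint L4S/SCE testbed evaluation matters.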
V
Also, the ns-3 implementation of fast start was added, which made good progress on the FreeBSD implementation, which didn't exist before this. We got the handshake and the feedback working, and the protocol tested packet by packet in packetdrill, and we built a good working relationship with the SCE team. That was us, the L4S team, but now we're the L4S-and-SCE teams. And what we learned, well:
V
We now question one of the most recent changes made to the spec, having tried to implement it, so we may go back on that; but we'll rethink it. We also discovered that a counter that crosses a byte boundary (obviously we knew it crossed a byte boundary) made us start thinking about cross-compiling and such, and made it a bit more challenging to make sure it would compile correctly. And we also learned that remote attendance by newcomers at our hackathon doesn't really work. Yep.
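The compile-portability worry about a counter field that crosses a byte boundary is the classic argument for extracting such fields with explicit shifts and masks rather than compiler bit-fields, whose layout varies between compilers and endiannesses. An illustrative sketch; the 12-bit field position is invented here, not the actual spec's field:

```python
# A hypothetical 12-bit counter straddling two bytes:
# byte0 holds counter[11:4]; the high nibble of byte1 holds counter[3:0].
def extract_counter(b0, b1):
    return (b0 << 4) | (b1 >> 4)

def insert_counter(value, b1_low_nibble):
    # Returns (byte0, byte1) with the counter packed in and the rest of
    # byte1 preserved. Shifts and masks behave identically everywhere,
    # unlike C bit-field layout.
    value &= 0xFFF
    return value >> 4, ((value & 0xF) << 4) | (b1_low_nibble & 0xF)

b0, b1 = insert_counter(0xABC, 0x5)
assert (b0, b1) == (0xAB, 0xC5)
assert extract_counter(b0, b1) == 0xABC
```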
E
W
Okay, thank you. So this is a report from the WISHI hacking activity here at the IETF hackathon. The work on IoT semantics and hypermedia interoperability is a long-running activity at the IETF; this is already our sixth hackathon. We are in the Thing-to-Thing Research Group but, of course, the work spans multiple organizations and individuals.
W
So we've been working actively on the One Data Model simple definition format (SDF), and then also on data models from other organizations, in part on OMA SpecWorks' LwM2M and IPSO models, on the road to One Data Model convergence. So we use that One Data Model simple definition format to do data and model interchange.
W
Now we have a tool for generating CDDL schemas for the SDF language, and we can use all the CDDL tooling for that too; that's a side result of this activity. We now also have a JSON format proposal for CoRAL, so you can make use of JSON tooling with your CoRAL representations.
W
One activity on the data models was binary data extraction: if you have something that is not easily usable as JSON or such, you can now use these tools to extract JSON-LD from it. We have a playground deployment available for that, where you can POST your data and get JSON-LD representations back. And the other big one was brewing coffee with hypermedia. Of course, since the days of the Hyper Text Coffee Pot Control Protocol, we'd been wanting to do this.
W
Now we have modern tools and protocols for this purpose. So we have created a coffee machine reference scenario, also known as Carsten's coffee machine. You can discover and describe your coffee machine, discover menu options, make coffee selections, and finally get some coffee brewed. We now have two open-source implementations that use CoAP and CoRAL to achieve especially the first three steps; the last one we're still working on.
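The discovery step in that scenario typically starts with a CoAP GET to /.well-known/core, which returns links in CoRE Link Format (RFC 6690). A minimal parser sketch for such a payload; the parsing is deliberately simplified (quoted commas are not handled), and the coffee resource names are invented for the example:

```python
def parse_core_links(payload):
    # Simplified RFC 6690 parsing: entries are comma-separated,
    # each "<uri>;param=value;..." with optionally quoted values.
    links = []
    for entry in payload.split(","):
        target, *params = entry.strip().split(";")
        attrs = {}
        for p in params:
            key, _, value = p.partition("=")
            attrs[key] = value.strip('"')
        links.append((target.strip("<>"), attrs))
    return links

payload = '</coffee/brew>;rt="brew";if="actuator",</coffee/menu>;rt="menu"'
links = parse_core_links(payload)
assert links[0] == ("/coffee/brew", {"rt": "brew", "if": "actuator"})
assert links[1][0] == "/coffee/menu"
```

From the `rt` (resource type) and `if` (interface) attributes, a client can then select which resource to describe and interact with hypermedia-style.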
W
And this is the set of people who worked in our team. This time we have one new first-time member, Michael McCool, and we had two remote participants. If you want to see more information, links, open-source implementations, etc., you can go to our wiki page; all the information is there. Thank you.
C
Okay, next I see the QUIC table.
X
Okay, hi, and this is the report from the QUIC table, which was the big table in the middle somewhere. We are also the HTTP/3 table, because that's kind of the same thing. This is our regular interop spreadsheet; it's getting pretty crowded. So we had nineteen implementations that we're tracking; most of them are both client and server. Each letter is a test that's either passed or not passed.
X
We now have three rows. The first one is sort of the table stakes, basic protocol stuff. The second row is quote-unquote advanced features; they should really be part of the first row, but they're not yet sufficiently widely deployed that we can do that. And the third row is new: a bunch of new tests that specifically test HTTP/3 compatibility.
X
You see a bunch of white compared to what I showed at previous hackathons. That's because draft-22 only dropped maybe 10 days ago, so a bunch of implementations basically didn't have time to update yet; this should change. But this is the most participation we've ever had, and we keep adding new implementations, so it's looking pretty good. Most of them were here; a bunch of companies also sent engineers specifically only to the hackathon, and they're not going to stay around for the IETF, which is kind of an interesting development.
X
So it seems some companies find more benefit in the hackathon than in the actual standards meeting, at least, which we should maybe consider in some form. What's shown is a lot, so I'm not going to spend too much more time on this. One thing that's also new (I don't know why it shows up like this) is some simulation work that a couple of people have done a bunch of work on.
X
Researchers among you probably know ns-3, which is a network simulator, and Jana and Marten have worked on allowing you to define an ns-3 simulation, so you can define well-defined TCP cross traffic or topologies, and then you can plumb an actual QUIC implementation into that topology and do congestion testing, for example, repeatably. So it's kind of nice, kind of cool; it's early days, it's the first time we tried this. We plugged in, I think, two or three different implementations.
X
This is a sequence number versus time plot. The transport people will be very excited, because now QUIC starts to look like TCP: you can look at this graph and see what's going on, which was hidden before because it's all encrypted. So with this endpoint cooperation, you can generate plots like this. This is from the simulator with one of the stacks, using Robin Marx's tool.
X
There's a logging format being defined, called qlog. He has tools to visualize qlog into something like this: at the bottom, you see how the RTT changes, the RTT that QUIC thinks it has over the path, and then you see a regular sequence number and ACK plot. So this is exciting because, finally, it means you don't have to look at the bits anymore in order to understand what's going on in terms of congestion control. So this is very cool. Thank you.
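The qlog idea, endpoints emitting structured JSON events instead of forcing observers to decrypt packets, can be sketched as follows. The field names follow the general shape of the early drafts, but the schema was still evolving at the time, so treat them as illustrative:

```python
import json
import time

def qlog_event(category, event_type, data, t0=0.0):
    # One event: relative timestamp (ms), category, event type, event data.
    return [round((time.monotonic() - t0) * 1000, 3), category, event_type, data]

t0 = time.monotonic()
trace = {
    "qlog_version": "draft",
    "traces": [{
        "vantage_point": {"type": "server"},
        "events": [
            qlog_event("recovery", "metric_update",
                       {"latest_rtt": 23.4, "cwnd": 14600}, t0),
            qlog_event("transport", "packet_received",
                       {"packet_type": "1RTT", "packet_number": 7}, t0),
        ],
    }],
}
blob = json.dumps(trace)
assert json.loads(blob)["traces"][0]["events"][0][1] == "recovery"
```

A visualizer only needs this JSON stream from the endpoint to draw RTT and sequence/ACK plots, which is exactly what makes encrypted transports debuggable again.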
C
L
Okay, very good. Can you help us with the slides, please? Happy to; just say "next slide". Okay, thank you very much. So, greetings, it's Logan from Mauritius, from the cyberstorm.mu team; we are based in Mauritius. We've done a bunch of work on TLS 1.3, SSH, SCE, the new DSCP code point, and the IETF mobile app. Next slide, please. So, TLS 1.3:
L
our aim was to get more applications running on TLS 1.3. DSCP LE is a new code point that just became an RFC, and we've been working on integrating it into open source projects. Then there's the IETF mobile app, which we started working on at a previous IETF. Then the SCE draft came up recently, and the last thing we worked on was deprecating RC4 in SSH. So, next slide, please. On TLS 1.3 we've worked mostly with Golang-based software packages, so Mattermost, among others.
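Getting an application onto TLS 1.3 is, on the client side, usually a matter of raising the minimum protocol version in the TLS stack. A sketch with Python's standard ssl module, shown here only as an illustration of the kind of change involved (it requires OpenSSL 1.1.1 or newer underneath):

```python
import ssl

# Build a client context that refuses anything below TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
# Any handshake via this context now negotiates TLS 1.3 or fails, e.g.:
#   with socket.create_connection(("example.com", 443)) as sock:
#       tls = ctx.wrap_socket(sock, server_hostname="example.com")
```

The work in Go-based projects is analogous: bumping the TLS library's minimum version and verifying the cipher and key-exchange configuration still holds.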
L
On the new code point, we integrated the patch: we integrated it into Netperf, a PR was sent to OpenSSL, and it was also sent to nftables. The other thing that we worked on, if we go back up: as I said, there are links to a screenshot, and it has improved compared to last IETF. There is a Rust-based implementation of SSH that was still shipping RC4, and we deprecated that as well. Lastly, we worked on an SCE implementation for fq_codel in FreeBSD, based on a paragraph that Rodney and Jonathan had published.
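For context on the code-point integration work: RFC 8622 assigns the Lower Effort (LE) PHB the DSCP value 000001, and applying it from an application means setting the upper six bits of the TOS/traffic-class byte. A minimal sketch:

```python
import socket

DSCP_LE = 0b000001                 # Lower Effort PHB, RFC 8622
TOS_BYTE = DSCP_LE << 2            # DSCP occupies bits 7..2 of the TOS byte

assert TOS_BYTE == 0x04

# Mark a UDP socket's outgoing IPv4 packets as Lower Effort:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
assert s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == TOS_BYTE
s.close()
```

Patches like the ones sent to Netperf and nftables essentially teach those tools the new symbolic name for this value.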
L
It's still very basic, but it's enough that we can see SCE packets on the wire in Wireshark. So, next slide, please. What we learned, basically, was that open source projects tend to want to wait for new DSCP code points to become RFCs before accepting patches; that was the case for SSH too, so we wanted to wait. RC4 in SSH is mostly fading away; it's mostly going away, and we've not seen that many cases of open source projects still shipping it. SCE is just starting, so it's worth looking more into; that was also our work.
L
We are grateful to our sponsor, who hosted us, and there are links as well to our repos, where we have some of our results on SCE, as well as what else we could share. For the IETF mobile app, we have it available over that link, and we also included TLS 1.3 support. So thank you, everybody, for listening to what we've done.
J
Y
So if you're at a company or an organization that has the ability to sponsor, we'd appreciate that, and thanks to Nova Flow for helping out this time around; that was great as well. And thanks to all of you, really, the champions especially. Thank you for having your project welcome newcomers. We want to continue to make this a great experience for newcomers, not just for those of us who have been working on the standards for a long time, so I appreciate those of you who did have new people on your team and helped them get started.
Y
That's just fantastic. And thanks to you for paying attention to all the presentations. They were recorded, so we'll have them; they were live-streamed also, so if you missed something, you can go and get it afterwards. Lastly, if you didn't present anything but you still have some useful results to share, please do upload your presentation to the GitHub org, or, if you want, you can just send it to me and I'll upload it for you.