From YouTube: IETF112-HACKATHON-20211105-1500
Description
HACKATHON meeting session at IETF112
2021/11/05 1500
https://datatracker.ietf.org/meeting/112/proceedings/
A: Hello, everyone. Congratulations on making it to the closing of the hackathon. Hopefully it's been a productive week for you, and probably a bit of a tiring week as well, but if you accomplished some good things, then that makes it all worthwhile. This is my favorite part, especially when we're in this online, virtual format, because usually I like to go around to the different tables, see what people are working on, and kind of learn that as the hackathon's going on; that's a little more challenging to do in this format. I've met with some of you in Gather and elsewhere, but with some of you I haven't really even had a chance to do that, so I'm really looking forward to all the presentations. Let's see, we are just about at the top of the hour. First of all, is my audio coming through okay? This is a different computer for me, on a hotel network.
A: Okay, great, thanks. Actually, I'm traveling for a conference, and things are starting to pick up a little bit. It looks like there's a chance our next hackathon will be in person, so fingers crossed that we get good news on that. We'll see how things go. So, yeah, I'm going to go ahead and get started. This is being recorded, in case you miss any parts of it, but I figure it's best to jump in. I know it could be a very inconvenient time for some of you, so we'll get going on that.
A: The whole goal of the hackathon is really to speed up and improve the work that we're doing in the IETF: to work on implementing standards as we're defining them, and to get more running code related to those standards. That not only helps make the standards better, but also makes it a lot easier for developers to go and adopt a standard and add support for it into whatever it is they're working on. It's also just a great networking opportunity, bringing together people from different backgrounds, with different skill sets, who all have a shared interest in these standards but may be working on different aspects of them.
A: Some work on the implementation of a standard, as opposed to the definition of it, or perhaps a bit of both. It's also a great time to engage with what I would say are often not your typical IETF community members: people who come from universities, from other SDOs, from open source organizations, who are aware of and interested in the IETF, but for whom it's maybe not something they follow on a day-to-day basis. This is a great time for them to come in and work really closely with people from the IETF.
A: We operate under the Note Well. Hopefully you're all familiar with it, but if not, it's really good to take a look: it covers our processes and how we work. You can see the BCPs there at the bottom. The code of conduct is something we put a lot of emphasis on, and it's important to look at the copyright and IP disclosure rules, all that type of thing. It's good to be aware of the rules and of what it means to be participating in the IETF, so take a look at that; if you haven't already, I suggest you do it ASAP. As for the agenda: as I said, we've made it to the closing, and I'm not going to spend too much time going through slides. We're going to spend the bulk of the time hearing what all of you have accomplished.
A: I have a screenshot there on the right that I took maybe about an hour ago. Don't panic if you've updated it since then, if you've added yourself to that list more recently and you're not on that slide; we're not going to go based off the slide, we're going to go based off what's actually currently on the wiki page. I was just messaging with Alex in the background: he uploaded all the slides, all the presentations that have been put in our project repository. If you uploaded your presentation to that repository, it has been imported into Meetecho, so we can run through the presentations from Meetecho, as opposed to you having to share your screen. You can drive it from Meetecho, so Barry, if you're up for giving that a try when we get to that point, we can try going through and doing it that way. So far it's been working great for me, so I think you may find it an easier way to go through the presentations as well.
A: But I'm getting ahead of myself a little bit. I also wanted to share a reminder about the presentations; hopefully you were all here for the kickoff. These presentations are meant to be very brief, really just covering what the problem was you were trying to solve, what you accomplished, and some interesting highlights: what you learned, things you're going to take back into your working group, and perhaps any interesting collaboration with other SDOs, open source organizations, or other people outside of your IETF working group, or even with another IETF working group that maybe isn't as deeply involved with the standards you're working on. All of that is very interesting to hear. If you can do all that in five minutes, that's fantastic. This is meant to be a conversation starter, so don't feel like you need to cover all the nitty-gritty details; really just give us the high level, and that would be fantastic.
A
And
we
already
covered
that,
and
so
with
that
we're
going
to
switch
to
just
walking
through
the
presentations
and
barry,
I'm
happy
to
turn
it
over
to
you
and
we
didn't
get
a
chance
to
discuss
this
before
barry.
But
do
you
want
to
just
try
driving
it
from
the
uploaded
presentations
into
me?
Techo.
D: Yeah, I can do that, or each person can drive their own using the slides. Let's try that: I'll call you out, and you use the "share preloaded slides" option, which is the little piece of paper with the dog-eared corner. Click that, and it will allow you to share the presentation; you can pick your own presentation and run your own slides.
D: If you have trouble with that, let me know and I will run your slides for you. So why don't we go down in what appears to be alphabetical order in the wiki, or rather in the GitHub. We'll start with BMWG, and the next one will be the DNS error reporting people. So, BMWG.
D: There we go. There you go.
E: Okay, so I will be presenting our project in this hackathon. Our IETF 112 project is on containerized infrastructure benchmarking. The main goal of our project across these hackathons is to figure out container networking performance in various configurations, and the results will contribute to our BMWG draft on considerations for benchmarking network performance in containerized infrastructures.
E
Now
in
the
hackathon
112,
our
main
I
mean
a
project
is
to
do
the
very
much
parking
lot
rate
in
multiple
scenario,
with
different
acceleration
technologies
for
2v3,
exactly
bvp
and
obs
dbdk,
and
we
consider
the
luma
proceduration
for
so
to
see
how
it
will
impact
the
performance
of
cp
switch
so
inside
the
head
curtain.
We,
the
scenario
we
consider-
is
a
multiple
scenario,
so
in
our
one
one
one
ietr
package,
so
we
consider
the
single
signal.
E: The CNFs form a service chain, with the packet forwarded back into the traffic generator, and in a similar way we varied the CNF placement, considering how incoming packets are shipped between the CNFs and the vSwitch in the different placements. We also considered placing the chain on a different node from the NIC, which degrades performance, as we had already observed in the single-CNF scenario.
E: The things we learned from these results: first, in the multiple-CNF scenario, VPP outperformed OVS. Second, from the NUMA alignment results: the alignment of the vSwitch with the NIC and the alignment of the CNF with the NIC perform almost the same, while placement on different nodes slightly degrades performance at larger packet sizes. The second observation concerns the CNF and the vSwitch.
E: We also saw that separate placement of the CNF and the vSwitch significantly degrades performance, by 10 to 50 percent. Finally, when receiving packets at the CNF, we observed the reverse result for OVS-DPDK and VPP: with OVS-DPDK, the CNF and vSwitch on the same node have higher performance, while with VPP, the CNF and vSwitch on different nodes have higher performance.
D: Okay, hearing none, thanks again, Tran, good job. Next up is the DNS error reporting project, Willem.
G: So I'm going to try this slide sharing thing. Yep, let's see.
D
And
the
next
next
step
will
be
i2nsf
so
get
on,
get
ready,
yep,
your
slides
are
up,
go
for
it.
Okay,.
G: Yes, so the DNS hackathon this time was actually just a single project, which I did with my colleague Tom Carpay. So what is this about? Roy Arends has a draft, which was adopted not so long ago in the DNSOP working group, and which builds upon Extended DNS Errors. Extended DNS Errors is a mechanism by which a resolver or authoritative nameserver can send detailed information about an error that occurs back to the querier.
G: The DNS error reporting draft builds upon that, but instead of informing the client about what went wrong, it informs the authoritative servers that are serving the broken zones about the error, so it's actually Extended DNS Error Reporting. I know that Roy Arends, who is on this draft, and also Matt Larson, don't like the funny acronym, but I do like it, and therefore I called the project EDER.
G: This was discussed one and a half weeks ago during a DNSOP interim meeting. The mechanism consists of the resolver sending an EDNS option to the authoritative server saying: hey, do you have a reporting agent for me where I can send my errors? And it was noted during the interim meeting that this might break some resolvers.
G: We made a program to do this in eBPF, and if you haven't heard about it yet: it's not the old Berkeley Packet Filter from tcpdump. It's very exciting and happening: you can run programs in the Linux kernel, or very close to the network card, or even on the network card hardware, and that's what I like about it too.
G: What we have found, playing with BPF at NLnet Labs, is that you can augment existing nameserver installations. So we made something which is nameserver-agnostic; you don't have to anticipate it beforehand.
G: And here you see an example. I have this server, eder.nlnetlabs.nl, and it's running the EDER domain. I had not anticipated DNS error reporting on it, but to enable it I just have to do this: clone the repository, initialize the submodule, go to the directory with the option, and run "make load", and I'm done. You can see here, in the result of a dig query, that it is indeed reporting the reporting agent in an EDNS option.
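To make the wire format concrete, here is a minimal sketch of how an EDNS0 option carrying a reporting-agent domain could be encoded, following the generic OPTION-CODE / OPTION-LENGTH / OPTION-DATA layout of RFC 6891. The option code 65280 used below is from the experimental range and is purely illustrative; the real code point for DNS error reporting is assigned separately.

```python
import struct

def encode_name(name: str) -> bytes:
    """DNS wire-format encoding of a domain name: length-prefixed labels,
    terminated by a zero-length root label."""
    out = b""
    for label in name.rstrip(".").split("."):
        raw = label.encode("ascii")
        out += struct.pack("B", len(raw)) + raw
    return out + b"\x00"

def encode_edns_option(option_code: int, agent_domain: str) -> bytes:
    """Generic EDNS0 option: 16-bit OPTION-CODE, 16-bit OPTION-LENGTH,
    then OPTION-DATA (here, the reporting agent domain in wire format)."""
    data = encode_name(agent_domain)
    return struct.pack("!HH", option_code, len(data)) + data

# 65280 is an experimental-use code point, used here only for illustration.
opt = encode_edns_option(65280, "agent.example")
```

This is only the option payload; in a real response it would be appended to the RDATA of the OPT pseudo-RR.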
G: I did a very simple measurement. RIPE Atlas has 11,000 probes currently, I think, but for the hackathon I did it with just a thousand; doing it with all of them is a little bit more work, but it was good to have this experiment anyway, I think.
G: 962 probes participated; the program that processes the results is there at the link too. What we learned was that 99 percent of the resolvers don't have an issue with this option, and the remaining 1 percent is also a little bit unclear, because we actually had more answers on the domain that did send the option than on the domain that did not send the option, which was the baseline measurement. So my initial guess is that support is okay, but of course there needs to be more.
G: We need to do large-scale measurements to be sure, and to be certain that this can be done for DNS error reporting without causing problems. Another thing we learned, or rather discussed, is that this is like DMARC: it gives DNS operators serving domains authoritatively the confidence to deploy DNSSEC, because they get feedback about what is happening on the internet if something breaks, etc.
G
So
that
might
be
some
interesting
future
work
and
that's
it.
We
just
worked
with
the
two
of
us
on
this
because
you
know
everybody
is
busy
and
it's.
It
works
better
on
in
a
live
meeting,
but
that
was
nice
to
do
anyway,
and
I
think
that
maybe
we
present
this
a
bit
more
elaborately
at
dinosaur
next
week.
A: Yeah, thanks for the presentation. The potential for helping with the DNSSEC deployment problem, as I understand it, sounds really interesting and an encouraging thing to try out. Also, it was interesting to hear you bring up eBPF right at the beginning.
A
That's
prevalent,
I
think,
in
the
networking
space
edpf
gets
used
a
bit
and
now
I'm
hearing
it
used
a
lot
for
dealing
with
security
things
that
just
I
don't
know
it's.
It
seems
like
a
really
neat
kind
of
toolkit,
no
way
to
to
be
able
to
use
so
interesting
to
see
you
using
it.
The
way
you
did
it
makes.
G: Yeah, absolutely. We sort of made a toolkit, and this project uses parts of that toolkit to implement this option. It's cool, because a feature can be implemented independent of which nameserver you're running, and it's very fast, very suitable for dealing with denial-of-service attacks and those kinds of things.
H: Okay, so hello, this is Jaehoon. This is the I2NSF hackathon project report, and this is the poster for the project.
H: As you can see, this is the I2NSF framework: the I2NSF user gives a policy, and then the security controller translates it into a low-level policy for the NSFs' security enforcement.
H
This
hackathon
project
case
item
accept
the
analyzer,
collects
the
monitoring
data
from
nsf
using
a
monitoring
interface
and
then
that
monitoring
data
are
stored
into
hyperledger
fabric
as
a
distributed
database.
After
that
secure
controller
has
a
web
server
and
it
display
the
monitoring
data
into
web.
H
So
this
hackathon
project
we
use
the
distributed
database
based
on
the
hyperlaser
fabric,
so
you
can
see
itunes
of
analyzer,
collect,
mounting
data
from
nsf
amb
and
then
using
nested
a5.
That
information
will
be
delivered
to
hyperledger
the
organization
node.
This
is
this
should
distribute
to
the
database
node
and
and
then
using
order
and
confirm
that
stories,
and
then
the
data
information
delivered
to
control
law,
okay
and
then
it
can
display
that
motion
data
into
web.
H: After that, the distributed database receives the NSF monitoring data from the I2NSF analyzer; you can see it is in a JSON format. On the right-hand side we display the monitoring data: the system resources, such as memory and disk, and the network traffic. If the incoming traffic is over some threshold, that indicates a DDoS attack. Last time we used a centralized database with MySQL; this time we replaced MySQL with Hyperledger as the distributed database.
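The threshold rule just described can be sketched in a few lines. This is a minimal stand-in for the dashboard's alarm logic, not the project's actual code; the field names and the sample numbers are illustrative.

```python
def classify_traffic(samples, threshold_pps):
    """Flag measurement intervals whose incoming packet rate exceeds a
    configured threshold, mirroring the dashboard's DDoS marker."""
    return [{"interval": i, "pps": pps, "alarm": pps > threshold_pps}
            for i, pps in enumerate(samples)]

# One interval far above the threshold is flagged as a possible attack.
report = classify_traffic([120, 95, 40_000, 110], threshold_pps=10_000)
```

A real deployment would of course use a smoothed baseline rather than a fixed threshold, but the idea is the same.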
H: With databases like the blockchain, the distributed database system can improve the security and reliability of our I2NSF framework. As a next step: this time we just stored the I2NSF NSF monitoring data, but next time we will try to store more data and information, such as security policies and NSF capabilities.
H: This is the member information: for the hackathon we worked with the Sungkyunkwan University BMWG team and the IPWAVE team, and these are my team members. This time we also gathered in Korea, at the Westin Josun hotel in Busan, where we worked on the BMWG, I2NSF, and IPWAVE hackathon projects. And this is the sponsor information.
H: Okay, this is the IPWAVE hackathon project report, and this is the hackathon project poster. Basically, this time we implemented the IPWAVE context-aware navigation protocol, called CNP, so a robot car and a web server can communicate with each other.
H: We used the IPWAVE CNP vehicle information option type, delivered over the World Wide Web Consortium's VISS, the Vehicle Information Service Specification; we delivered the information using that standardized format.
H
Also,
we
demonstrated
the
text
last
time
we
used
a
wave
such
as
a
tsrc,
so
european
folks
they
are
popularly
using
simply
takes
a
cellular
feature
to
everything.
So
so,
in
this
case,
we
use
the
simulation
based.
Information
was
done
so
I
said
this
ipwave
hackathon
case.
We
had
two
parts.
First,
one
is
the
simulation.
Second,
one
is
robot
car
based
implementation.
H: VISS, the Vehicle Information Service Specification, delivers vehicle signal or sensing information to the web server. For the first part, the simulation case, we used SUMO for vehicle mobility simulation, and the lower part is OMNeT++ with a C-V2X module to simulate the vehicle communication based on C-V2X.
H
So
this
figure
shows
our
information,
implementation
of
omni
plus
plus,
so
we
implement
the
peaker
structure
to
support
the
cv2x.
Also
on
this
is
ipv6
stack,
especially
the
icm
5006
is
for
context
over
navigation
protocol.
Okay.
So
we
used
this
stack
so
last
time
we
used
elto.11
waiver,
okay,
so
at
the
eleven
ocb
stack,
so
we
used
on
this
kind
of
mustache.
H
This
time
we
demonstrated
the
the
feasibility
of
cb
tracks.
H
So
this
is
a
remote
server
and
the
vehicle
so
using
ip
wave
vehicle
mobility
information
message
can
be
delivered
over
w3c
standard,
such
as
a
free
iss
figure
information
service
specification.
Also,
a
general
pre-access
figure
signal
specification,
deliver
vehicle
sensor
or
other
signal.
Information
can
be
delivered
over
viss
format,
so
this
is
also
open
source
project,
so
you
can
get.
The
first
link
is
our
simulation
for
sibu
text.
H
The
second
one
is
a
local
car
based
implementation
card,
so
you
can
click,
and
then
this
a
ai
on
r1,
robotics
load
car,
can
deliver
its
sensing
information
or
figure
speed
information
using
wi-fi
to
a
web
server.
H
This
link
demonstrate
the
civil
text
based
context,
aware
navigation
protocol,
okay
exchange,
so
we
work
for
a
whole
week
with
the
sunshine
university
vm
wg
and
i2
nsf
team
work
together.
So
this
is
a
team
member
information.
So
my
ph
student,
the
pnma
and
my
the
master
student
chun
hee
maekwon.
H
So
this
is
our
korean
hackathon
teams
members.
So
this
is
a
sponsor
also.
We
have
appendix
for
cbu
access
examination
and
also
we
have
index
for
our
local
car
implementation
and
manipulation
for
people
having
interesting
for
our
ip
wave
project.
Thank
you
for
your
attention
and
interest.
D: Thank you, Paul. Anybody? Thank you.
A: Yeah, hi Paul. Thanks for the great presentations; you always bring great projects to the hackathon, so I appreciate that. The C-V2X, that was a new one to me; which standards organization defines that?
H: C-V2X is from 3GPP, the Third Generation Partnership Project, for 5G and LTE. Recently 3GPP has also been working on 6G, but basically in Korea's case we deployed 5G, and I believe the United States is also serving 5G, for cell phones, mobile smartphones. In the vehicle case, the vehicle-to-everything case, C-V2X Mode 4 provides ad hoc communication, which means the vehicles can communicate without a base station. In that case we can provide communication among vehicles to avoid accidents.
H: In the United States' case they consider DSRC/WAVE, but I think the two technologies will coexist in the future. That's why the IPWAVE working group needs to consider C-V2X as a MAC and PHY protocol for IPWAVE navigation and other applications and communication technologies. Okay, thanks. Thank you, Charles.
D
If
not
thanks
again
paul
and
yes
thank
you.
D
H
D
Thank
you
and
oliver
you're
next,
with
the
aspa.
J: Okay, can you hear me? (We can.) Wonderful. Okay, so this was our first active hackathon.
J: I had participated in some before, just listening in, but this time we said we wanted to take part with a project of our own, so let me go right into what we did. We've worked for a long time on BGP security: we've worked on BGPsec path validation and BGP origin validation, and for that we created reference implementations and a large software suite.
J
What
we
know
employ
also
for
another
work
at
the
sider
ops,
working
group,
the
aspa
autonomous
system
provider,
authorization
and
the
goal
of
that
is
of
this
hackathon.
What
we
started
this
time
is
to
lay
the
groundworks
for
future
interoperability
tests
between
different
implementations
and
to
create
large-scale
tests
that
then
can
be
used
to
verify
the
different
validations
performance,
etc,
etc,
etc.
J: A very quick explanation of how the system in general works: ISPs, or operators of autonomous system routers, basically register their data in the RPKI database, and that is then queried relatively often by validation caches, which go out, download the X.509 certificates and so on, and then create something we can call, maybe, a white list.
J: So the tools we used were basically our BGP-SRx software suite, where we have the ASPA verification implemented. Currently the draft is at version 8, but there is already an algorithm correction, introduced some IETFs back.
J
That
will
soon
be
added
to
this
standards
draft
and
we
have
the
implementation
for
that.
The
other
thing
is
the
rfc
8210
is
extended
to
now
also
carry
aspa
objects,
and
we
made
the
reference
implementation
for
this.
J
So
we
wanted
to
use
our
test
harness
to
basically
yeah
run
larger
scale
tests.
So
what?
What
was
our
task
for
this
hackathon?
We
said.
Okay,
we
want
to
go
out.
We
take
sample
internet
scale,
asp
data
and
we
use
the
kata
reference
data
to
specify.
J
J
Then
the
custom
is
and
a
list
of
providers
for
these
iss
and
what
we
then
also
did.
We
went
out
to
route
views
and
got
some
egp
updates.
J
Maybe
follow
another
goal
like
so
what
what
did
we
do?
So
we
went
out
to
the
cater
and
we
got
data.
Actually,
I
think
it's
the
data
from
october
1st
2020
because
currently
cada
is
re,
revamping
their
algorithms
and
do
not
provide
the
latest
internet
data.
But
for
us
it
was
perfectly
fine
because
we
just
said
we
wanted
to
have
something
that
if
we
use
already
the
internet
traffic,
we
wanted
to
have
data
that
is
compiled
out
of
the
internet
topology.
J
Then
we
what
we
did
was
we
wanted
to
down,
select
that
and
we
did
that
where
we
said.
Okay,
if
I
don't,
for
example,
I
take
a
100
routes,
a
thousand
routes,
10
000
routes.
So
what
we
did
we
created?
We
created
tools
that
go
out
and
create
a
unique
as
path.
We
were
not
interested
in
the
prefix,
because
aspa
does
not
look
into
the
prefix.
J
It
tries
to
identify
route
leak
space.
Basically,
so
if
there's
a
route
leak
for
one
prefix
and
most
likely,
it's
also
for
other
ones,
the
the
validation
here
is
basically
on
on
the
path
itself.
So
what
we
did
was
we
down
selected
out
of
around.
J
I
think
it's
currently
around
800
000
prefixes,
if,
depending
how
you
how
you
run
the
b2b
dump
on
the
mrt
files,
you
get
up
to
also
800
000
updates
we
down
selected
that
to
around
100
000,
unique
ones,
and
then
from
there
we
we
said
okay,
if
you,
if
you
create
a
thousand,
we
only
want
to
have
the
cata
data
that
contains
all
asses
within
these.
This
is
within
the
update
data
stream,
so
because
the
other
ones
they
just
would
lie
in
the
system
dormant.
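The down-selection step just described, collapsing updates to unique AS paths and ignoring the prefixes, can be sketched as below. This is an illustrative reimplementation, not the team's actual tooling, and the AS numbers are made up.

```python
def unique_as_paths(updates):
    """Collapse (prefix, AS path) updates to their unique AS paths,
    ignoring prefixes, preserving first-seen order; ASPA validation
    looks only at the path."""
    seen, out = set(), []
    for _prefix, as_path in updates:
        key = tuple(as_path)
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out

updates = [
    ("10.0.0.0/8",   [65001, 65002, 65003]),
    ("10.1.0.0/16",  [65001, 65002, 65003]),  # same path, different prefix
    ("192.0.2.0/24", [65001, 65010]),
]
paths = unique_as_paths(updates)  # two unique paths remain
```

The same set of path keys can then be used to filter the CAIDA relationship data down to ASes that actually occur in the stream.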
J
Then
we
performed
the
aspa,
validation,
very
quick
explanation
on
the
validation,
what
it
is.
So
if
I
have
a
valid,
then
I
didn't
identify
your
outlet.
If
I
have
an
invalid,
then
I
detected
a
route
leak.
If
the
outcome
is
unknown,
then
I
don't
have
enough
asba
information
to
make
any
determination
either
way
and
if
an
ai
set
is
in
the
bgp
path,
then
it's
unverifiable.
J
That
is
one
of
the
results.
What
I
just
compiled
yesterday
night,
where
we
got
everything
up
and
running
with
all
the
scripts
and
it's
to
be
taken
with
a
little
bit
of
grain
of
salt,
because
it's
a
relatively
small
data
set.
I
think
it
was
around
500
routes
and
depending
it
becomes
much
more
interesting.
I
guess
if
we
go
on
a
higher
one,
but
one
interesting
thing
to
see,
for
example,
is
already
that
we
selected
a
large
scale
isp,
and
we
just
said
okay.
J
If
this
isp
is
my
provider,
how
many,
how
many
prefixes
or
how
many
updates
I
received
from
them,
how
many
paths
are
valid
and
that
is
94
and
how
many
are
invalid.
That
basically
means
result
into
route
leaks,
three
percent
and
another
three
percent.
I
could
not
make
a
determination
now
if
I
turn
that
around
and
say
that
now
my
isp
is
my
customer,
then
of
course
I
I
don't
expect
94
percent
of
valets
and
that's
what
we
definitely
can
verify
here.
So
we
have
14
of
wallets.
J
We
have
we
identify
18
of
route,
leaks
and
68.
The
data
is
not
there
to
to
make
any
distinction
in
either
way.
We
didn't
had
time
yet
to
really
dig
deep
into
the
analysis
of
that
whole
thing.
That
is
something
that
has
to
be
done
now
going
forward,
but
for
this
week
we
basically
wanted
to
create
the
the
infrastructure
that
one
can
start
working
on
that,
and
so
what
is
the
to
be?
What
needs
to
be
done?
Currently,
our
experimentation
runs
with
one
implementation
on
the
test
with
against
one
pier.
J
It
starts
becoming
interesting
running
this
against
multiple
peers,
especially
when
we
go
into
the
performance
testing,
and
hopefully
we
will
see
other
implementations
coming
between
today
and
the
next
hackathon.
So
then,
it
might
be
interesting
to
compare
validations
with
other
implementations
other
than
ours
to
make
sure
that
the
validation
results
are
all
all
the
same
etc.
J
Further
further
analysis
will
be
of
interest
is
gradual
deployment,
because
what
we
did
here
right
now,
we
had
an
almost
hundred
percent
deployment
of
aspa
and
if
you
roll
out
a
new
technology,
that
is
not
what
happens
so.
The
interest
there
is
to
see
at
one
at
what
deployment
rate
start
aspa
becoming
really
making
a
positive
impact
on
detecting
route
leaks.
J
If
you
go
on
github
and
you
just
search
for
bgp
srx,
then
you
will
find
right
away
the
the
software
suite
the
code,
what
we
developed
for
this
hackathon.
J
I
still
want
to
clean
it
up
a
little
bit
before
we
put
it
up
on
github,
but
it
will
be
made
available
as
well,
and
I
didn't
make
a
final
decision
yet
if
you
put
it
on
the
on
the
ietf
hackathon
github
part
or
if
we
also
run
it
in
the
examples
of
ours,
but
depending
what
we
will
do,
we
will
have
it
at
least
linked
from
from
the
nist
github
part,
so
that
you
can
find
the
code,
the
the
scripts
and
and
and
so
forth.
I: The following gives some of our specifications, the related drafts which may be used in this project. So what got done? Our achievement is AnsibleGen, an automatic Ansible API generation tool. The generated Ansible APIs can be integrated into the Ansible framework automatically, and the tool also supports customized input parameter checks and functionality. By making use of AnsibleGen, we have already successfully delivered an L3VPN service to a device through an Ansible playbook.
I: The following diagram gives the architecture of AnsibleGen and the entire processing flow. The user inputs the related YANG modules and their customized API description XML files. These customized API profiles are used to describe the desired Ansible APIs, which will be called when delivering NETCONF messages to the targeted nodes. A profile is very similar to a NETCONF message, but without the values carried; it could be an edit-config, get, get-config, or RPC operation defined in the YANG modules. The YANG parser and the XML parser check and parse the YANG modules and the XML description files, respectively.
I: The results are the input of the adapter, which is responsible for generating the internal objects of AnsibleGen; these are then handled by the Ansible module generator, which generates the Ansible APIs. Finally, the generated APIs are deployed into the Ansible environment automatically, where they work as Ansible modules. When we issue a NETCONF request through a playbook, the related APIs are called and complete the configuration management task.
I: During this hackathon week we learned that documentation is important for involving others; it is even as important as the code. We can't expect everyone to know how to use the tool just by reading the code. The second lesson we learned is to test early and often, so that we have more opportunities to catch bugs and more time to fix them. Also, I would like to say thanks to the colleagues who visited our project and provided very valuable suggestions and great input.
B: WHIP comes from a recently formed working group called WISH, which stands for WebRTC Ingest Signaling over HTTPS, and WHIP is the name of the protocol being standardized there. It recently went to version 01, which was the main version we tried to test in this interoperability test, and we basically wanted to cover two specific aspects.
B: ...which can be a bit more problematic at times. We actually did a few first interop rounds among a limited set of implementations a few weeks ago, which I documented in a blog post that you can read there if you want. At the time we had three clients and three servers interacting with each other, and the results we got from that test, which eventually was successful, helped us identify a full set of key issues in the 00 version of the document, leading to 01.
B: Most of the issues that were addressed in 01 actually came out of that interop round, which was quite helpful. This time around we wanted to test more implementations, so we ended up testing four different server implementations: one that I wrote myself, one that Juliusz wrote for Galène, Sergio wrote the integration in the Millicast platform, and Cameron integrated it into his own SFU, called the CFU. For clients, we had six different clients that we could test.
B: Basically, almost all of the servers and clients used different WebRTC stacks, which is why it was quite interesting to see how they fared with each other. The end result, in a nutshell, was this: it was mostly successful, and actually more successful than the picture there suggests. Just to give you a quick key: a green smiley means that everything worked fine out of the box.
B
A
yellow,
yellow
smiley
means
that
we
had
to
tweak
something
in
other
client
of
server
to
get
something
working
red
meant
that
nothing
worked,
and
so
further
investigation
is
needed
and
no
arrangements
that
we
couldn't
test
due
to
something
that
was
actually
unrelated
to
whip
itself.
So,
like
course,
issues
or
stuff
like
this,
and
I
try
to
summarize
most
of
these
reports-
the
different
results
in
these
slides.
B: I will not go into much detail on all of these, because that might take more than the five minutes that I have, but just to give you an idea: for instance, my client died when trying to talk to one of the SFU servers, because it made some assumptions; and actually some of the problems we experienced were caused by assumptions that were made beyond what the document specifies.
B: I should clarify that in this slide I marked WHIPI, which is the Raspberry Pi based client that Tim wrote, as succeeding with neither Galène nor the CFU, and that was true up until a few hours ago. The cause was basically a nice interop issue between Tim's stack and the Pion stack, which is Go based. Eventually Tim managed to find out what the issue was, which was actually more related to a specific release than to ICE itself, and he also had to address, again, an assumption that the Pion stack made with respect to the usage of the MID in WebRTC, which again was an interesting side effect of this test.
B
That's
that
helped
helped
us
identify
a
webrtc
related
issues
rather
than
a
weak
issue,
and
then
there
were
some
other
issues
related
again
to
either
to
assumptions
or
hard-coded
assumptions,
because,
for
instance,
some
clients
couldn't
talk
to
some
service
just
because
each
of
them
were
hard-coded
to
use
one
codec
rather
than
another.
And
so
in
this
case,
this
isn't
really
a
failure
in
how
to
how
they
use
whip,
but
mostly
a
failure
in
actually
ended
up
into
a
successful
negotiation
between
the
two.
B
Maybe the client only offers VP8 and the server only accepts H.264, and so you end up with a session that basically will not work. Apart from this, it was mostly issues related, for instance, to self-signed certificates not being accepted, which made it a bit harder to do tests locally.
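The codec mismatch just described is easy to picture: when the offered and accepted codec lists don't intersect, negotiation succeeds at the HTTP level but yields a session that cannot carry media. A minimal sketch (the codec lists here are illustrative, not from any particular implementation):

```python
def negotiate_codecs(offered, accepted):
    """Return the codecs both sides support, in the offerer's preference order."""
    accepted_set = set(accepted)
    return [c for c in offered if c in accepted_set]

# A client hard-coded to VP8 against a server hard-coded to H.264:
assert negotiate_codecs(["VP8"], ["H264"]) == []        # no usable codec
# With any overlap, the session negotiates fine:
assert negotiate_codecs(["VP8", "H264"], ["H264"]) == ["H264"]
```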
These sorts of things, and a few other assumptions, like CORS issues or missing support for bearer tokens, which is something that we actually want to look into sooner or later.
B
This first round was really focused on the basics: trying to get a WebRTC stream to be published using WHIP, possibly tweaking trickled candidates, and checking whether or not the video stream is actually consumable on the other end. We didn't want to really push it beyond that, mostly because this was the first time we all met together to try and do a larger test among each other, and what we really want to test next goes a bit further than that.
B
So, for instance: making sure that authentication using tokens actually works as specified in the document; how to properly use the Location and Link headers to address WHIP resources and to automatically configure STUN and TURN, which would actually be useful in environments that are more limited, in terms of setting up a peer connection, than the more open setups that we had right now; and addressing ICE restarts and related race conditions, which is something that we are actually discussing right now in the working group.
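The flow being tested has a simple HTTP shape: the client POSTs an SDP offer with a bearer token, and the response carries the answer plus a Location header naming the session resource. A hedged sketch of that shape (the URL and token are made up; this is an illustration of the exchange, not any of the implementations tested):

```python
from urllib.parse import urljoin

def build_whip_request(endpoint, sdp_offer, token):
    """Assemble the pieces of a WHIP publish: POST the SDP offer as
    application/sdp, authenticated with a bearer token."""
    headers = {
        "Content-Type": "application/sdp",
        "Authorization": f"Bearer {token}",
    }
    return ("POST", endpoint, headers, sdp_offer)

def session_resource(endpoint, response_headers):
    """The 201 response's Location header names the session resource,
    which the client later DELETEs to tear the session down.
    The value may be relative, so resolve it against the endpoint."""
    return urljoin(endpoint, response_headers["Location"])

method, url, headers, body = build_whip_request(
    "https://whip.example.com/endpoint", "v=0\r\n", "secret-token")
resource = session_resource(url, {"Location": "/sessions/abcd"})
```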
B
And
it's
something
that
we
haven't
had
the
time
to
actually
look
into.
As
as
far
as
the
different
implementations
are
concerned,
plus
some
issues
related
to
sessions
cleanup,
which
is
which
isn't
always
done
properly
by
your
clients
and
service,
which
means
that
you
can
end
up
sometimes
with
door
front
sessions
and
things
like
this.
So
it's
definitely
something
else
that
we
need
to
to
address
in
in
future.
Interrupt
tests,
and,
to
conclude,
this
is
the
the
interrupt
testing
team
that
worked
that
was
together
in
these
past
few
days.
B
So
sergio
is
actually
the
whip
champion
for
this.
For
this
hackathon,
then
there
was
me
there
was
tim
panton,
then
jenkins,
cameron,
elliott
and
alberto
gonzalez
stoic
as
well.
I
don't
know
if
there
is
any
question
for
me.
A
So
yeah,
thanks
for
the
presentation,
I
have
a
question
for
you,
which,
which
video
conferencing
tool
did
you
use
for
your
team
to
collaborate.
B
Well,
actually,
yeah,
that's
the
funny
thing
is
that
webrtc
developers
tend
to
tend
to
communicate
in
real
time
as
as
as
as
low
as
the
community.
The
real-time
communications
tend
to
be
as
low
as
possible.
So
we
try
to
use
mostly
synchronous
communications
like
we.
We
might
engage
on
matrix
or
twitter
or
any
kind
of
messaging
platform,
so
it
was
mostly
this
kind
of
interactions,
rather
than
having
a
conversation
like
we're
having
right
now,
we
tend
to
be
quite
lazy
and
just
just
chat.
Instead
of
talking
to.
A
Each
other,
okay
and
just
one
other
thought
it'd
be
interesting
on
your
your
slide.
Some
of
the
problems
you
identified
you
you've
already
fixed
so
to
have
a
I
mean,
I'd,
put
like
an
extra
big
smiley
face
on
those
the
ones
that
yeah
that's
true,
yeah
yeah,.
B
I
had
some
problems
trying
to
actually
frame
this
one,
because
the
yellow
ones
should
actually
be
a
bigger
smile,
because
it's
an
issue
we
fixed
rather
than
something
that
we
had
to
fix,
and
so
it
was
a
bit
more
of
a
frowny
face.
So
it
may
be
more
of
a
messaging,
a
pure
communication
on
my
side,
so
I'll
definitely
fix
this
next
time.
Thanks
for
the
feedback.
D
Okay,
thanks
again
lorenzo
and
steven
you're
up
next
with
augmented
packet,
header
diagrams.
K
On the other side, we are also considering tooling that derives meaning from drafts. That might be natural language processing run over drafts and documents, or it might be parsers for these structured languages here. And in terms of what we want to do with the documents, we're thinking about things like automatic parser code generation, so generating parser code for the protocols that the documents describe, automatically from the documents themselves; or producing mathematical proofs that demonstrate that the document describes a protocol, or something, that is provably correct.
K
As
I
say,
this
is
quite
broad
and
we're
really
trying
to
be
as
inclusive
as
possible
for
all
of
these
techniques
and
tools
and
languages.
But
this
week
we
worked
on
two
main
projects,
as
I
say,
mark's
going
to
discuss
his
work
on
computer
specifying
but
I'll.
Just
briefly
update
you
on
our
work
with
augmented
packet,
header,
diagrams
format,.
K
So,
just
to
briefly
introduce
this
format,
we
found
that
most
documents
that
are
specifying
protocols
do
so
with
a
sort
of
broadly
similar
format,
so
you
can
see
in
the
right
hand
side
here.
You
know
we
have
this
ascii
packet,
header,
diagram
and
then
below
that
we've
got
this
description
list
of
each
field
describing
the
length
of
the
field
and
its
contents.
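To give a flavor of what a machine-readable field list enables, here is a hedged sketch, not the project's actual tooling (which is linked from the draft): a hypothetical list of byte-aligned fields is turned into a network-byte-order format string and used to unpack a header.

```python
import struct

# Hypothetical field list in the spirit of the format: (name, length in bits).
# Lengths are kept byte-aligned here for simplicity.
FIELDS = [("version", 8), ("flags", 8), ("length", 16), ("stream_id", 32)]

def format_string(fields):
    """Map bit lengths to big-endian struct codes (network byte order)."""
    codes = {8: "B", 16: "H", 32: "I"}
    return ">" + "".join(codes[bits] for _, bits in fields)

def parse_header(fields, data):
    """Unpack `data` and name each value after its field."""
    values = struct.unpack(format_string(fields), data)
    return dict(zip((name for name, _ in fields), values))

hdr = parse_header(FIELDS, bytes([1, 0, 0, 9, 0, 0, 0, 42]))
# {'version': 1, 'flags': 0, 'length': 9, 'stream_id': 42}
```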
K
Of
course,
there's
a
lot
more
information
about
all
of
this.
We've
got
a
draft
that
specifies
the
sort
of
rationale
behind
what
we're
trying
to
do
with
the
format
and
that
describes
the
format
itself.
The
draft
also
has
pointers
to
the
github
repositories
with
all
of
our
prototype,
tooling
in
it,
and
any
contributions
and
any
comments
on
it
is,
are
more
than
welcome.
K
And finally, in terms of our prototype code, we started working on adding flexibility, so adding more languages that we can produce parsers in, and on adding robustness.
K
So
we
want
our
tooling
to
be
able
to
to
run
over
every
draft,
even
if
it
doesn't
include
our
format
and
we've
really
started
working
on
adding
that
robustness
and
in
terms
of
what
we
do
next.
That's
that's
the
sort
of
two
strands
that
we're
going
to
pick
up
on
as
we
move
forward
with
it.
K
So
all
in
all,
it
was
quite
a
productive
week,
a
lot
of
good
discussions
with
people
that
that
haven't
been
engaged
with
the
work
previously.
So
it's
been
really
good
to
see
some
new
people
and
thank
you
to
everyone
that
came
along
to
the
project
table
and
discussed
their
work
with
us,
really
appreciate
it
and
I'll
take
any
questions.
Anyone
has.
A
Yeah
great
presentation,
one
question:
if
I
wanted
to
try
this
out
on
a
draft
that
I
have
or
even
an
rfc-
or
I
guess
probably
on
a
draft
right,
because
I'll
probably
have
to
change
some
things
or
yeah
but
anyways.
Where,
where
can
I
go
to
get
kind
of
a
prototype
version
of
this.
K
The
the
best
bet
is
to
to
find
this
draft
on
the
data
tracker
and
we've
got
links
to
the
the
github
repository
at
the
bottom
of
the
draft,
and
so
all
the
tooling
is
available.
It's
the
tooling
itself
is
written
in
python.
The
instructions
should
be
hopefully
reasonably
clear,
but
if
there's
anything
at
all
that
comes
up
as
you
try
to
use,
it
then
happy
to
take
any
any
feedback
or
any
comments.
F
Okay,
all
right,
okay,
hi,
so
my
name
is
mark
pettigoner,
as
stephen
said,
participated
in
the
project
on
umbrella,
which
is
about
machine
readable
specifications.
So
this
will
be.
I
will
start
with
a
quick
introduction
of
what
computer
specifying
is
because
I
think
that
there
is
less
than
five
people
on
the
planet.
Who
knows
what
I'm
talking
about
so
he
started
a
long
time
ago,
and
the
goal
of
this
project
is
to
ensure
that
example
in
rfc
are
correct,
because
sometimes
they
are
not
and
because
programmers
generally
look
at
example.
F
So
basically,
a
computer
specification
is
a
document
which
is
written
in
the
format
which
is
called
ascii
doc,
which
is
a
which
is
not
a
markdown
but
looks
like
one,
and
the
big
advantage
of
this
format
is
that
it's
extensible
without
having
to
modify
the
code
as
a
xml2fc
hondura
is
in
fact
provided
by
the
metanormal
project
that
some
of
you
may
may
know.
So
I
wrote
an
extension
well
extensions
to
ask
askidok
that
permits
to
add
code
inside
a
document
in
the
same
document
that
will
be
your
internet
draft
or
your
lfc.
F
So the idea is that the block on top is a computerate specification. You have code, which you can recognize because there is a greater-than sign, which is called a Bird mark, on the left. So this is code, and then underneath you have text where you have this code macro: code, colon, and the code in between brackets, which will be evaluated and the result inserted into the generated text.
F
So
in
the
middle
you
have
the
command
which
is
used
to
to
do
that,
and
there
is
one
command
that
does
everything
and
it
generates
xml
to
lfc
text,
html
and
pdf
format.
For
this
document
and
on
the
bottom,
you
have
the
result,
what
will
appear
in
in
the
actual
document-
and
you
can
see
that
the
macro
is
replaced
by
the
values
that
are
in
fact
calculated
instead
of
being
filled
manually
by
the
author
of
the
hrc.
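The two mechanisms just described, Bird-marked code lines and an evaluated code macro, can be imitated in a few lines of Python. This is only an illustration of the idea; the real tooling consists of AsciiDoc extensions evaluating Idris, and the `code:[...]` syntax below mirrors the slide rather than any Python convention.

```python
import re

def extract_code(doc):
    """Collect the lines marked with a leading '> ' Bird mark."""
    return [line[2:] for line in doc.splitlines() if line.startswith("> ")]

def expand_macros(text, env):
    """Replace each code:[expr] macro with the evaluated expression."""
    return re.sub(r"code:\[([^\]]+)\]",
                  lambda m: str(eval(m.group(1), env)), text)

doc = "> x = 6 * 7\nThe answer is code:[x]."
env = {}
for line in extract_code(doc):
    exec(line, env)                      # run the embedded program
assert expand_macros("The answer is code:[x].", env) == "The answer is 42."
```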
F
You can have verification tools that verify that the examples are correct, or you can go the less easy way, which is to generate examples that are correct by construction. You still can verify them, but if you forget to verify them, they are still going to be correct. The problem with doing that is that you cannot use just any programming language.
F
All
programming
language
are
equivalent
to
each
other,
except
for
their
type
system,
and
in
this
case
we
need
a
type
system
that
can
encode
higher
order
logic
right,
which
there
is
probably
free,
programming
language
that
can
do
this
and
the
one
that
shows
this.
It
too,
which
was
designed
at
the
university
of
saint
andrews
in
scotland.
F
So
I
am
done
with
the
computer
specification
now.
What
I
am
working
on
is
to
provide
some
library
in
idris
to
provide
to
solve
common
problems
that
internet
drafts
also
would
have.
Why
and
I'm
working
on
a
lot
of
this
stuff
with
one
of
the
simpler
and
most
accessible
to
everyone
is
a
b
and
f
right.
A
lot
of
internet
of
rfcs
contain
abnf,
so
you
can
see
on
the
bottom
an
example
of
in
this
case
how
this
library
module
works
right.
F
So
I
design
the
domain
specific
language,
a
dsl
that
can
be
used
to
define
a
b
and
f
grammar
right.
So
here
I
defined
alpha,
which
is
probably
the
most
well-known
rule
for
abnf
and
then
here
on
the
top.
I
insert
the
the
serialization.
If
you
want
of
this,
and
the
result
of
the
serialization
is
what
you
have
underneath,
so
not
only
it
print
a
correct,
syntactically,
correct
string,
but
it
also
format
it
right
it.
It
does
pretty
print
printing.
F
So
if
there
is
not
enough
size
in
the
line
it
it
will
wrap
up
the
line
and
and
do
the
white
thing
according
to
analysis,
52
34.
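The wrapping behavior can be pictured like this. A naive sketch, not the Idris library's algorithm: RFC 5234 only requires that continuation lines begin with whitespace, and here alternatives are folded onto lines indented to the rule's `=` column.

```python
def wrap_abnf(name, alternatives, width=72):
    """Render `name = alt1 / alt2 / ...`, folding onto indented
    continuation lines when the current line would exceed `width`."""
    prefix = f"{name} = "
    indent = " " * len(prefix)
    lines, current = [], prefix
    for i, alt in enumerate(alternatives):
        piece = alt if i == 0 else "/ " + alt
        if current.strip() and len(current) + len(piece) + 1 > width:
            lines.append(current.rstrip())
            current = indent
        current += piece + " "
    lines.append(current.rstrip())
    return "\n".join(lines)

# ALPHA's two ranges, forced onto two lines by a narrow width:
assert wrap_abnf("ALPHA", ["%x41-5A", "%x61-7A"], width=20) == (
    "ALPHA = %x41-5A\n        / %x61-7A")
```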
But this is not enough to generate an example. Remember, my goal is still to generate an example which is correct, so my first idea was to generate examples from the ABNF; but this is a terrible idea, and the reason for that is that ABNF cannot describe all the constraints that you have in the PDU
F
It
is
supposed
to
describe
right,
and
I
found
a
really
nice
example-
nice,
because
it's
shorter,
it's
some
symbolic
expression
that
everyone
who
wrote
lisp
code
should
have
seen
right,
and
this
is
a
very
simply
of
a
version
of
which
sx
used
by
basically
from
machine
to
machine
transfer
right
here.
If
you
look
at
the
second
rule
token,
you
can
see
that
there
is
a
number
a
column
and
then
a
number
of
octa
octet.
The
problem
is
that
the
number
of
octet
is.
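The length-prefixed token rule is the classic context-sensitive constraint: the decimal number before the colon must equal the number of octets after it, which plain ABNF cannot state. A small Python sketch of encoding and checking csexp-style tokens, purely to show why generation from the grammar alone is not enough:

```python
def encode_token(data: bytes) -> bytes:
    """Render a csexp token as <decimal length> ':' <octets>.
    Correct by construction: the prefix is computed from the payload."""
    return str(len(data)).encode() + b":" + data

def valid_token(tok: bytes) -> bool:
    """Check that the length prefix actually matches the payload length."""
    length, sep, payload = tok.partition(b":")
    return sep == b":" and length.isdigit() and int(length) == len(payload)

assert encode_token(b"abc") == b"3:abc"
assert valid_token(b"3:abc")
assert not valid_token(b"4:abc")   # well-formed per ABNF, still wrong
```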
F
What we want is to be sure that an example we generate is also correct according to the ABNF, so the right way to do that is to generate a type for symbolic expressions, and eventually a DSL that makes it easier to do so. S-expressions are simple, so the code fits inside the slide. But how to be sure that the printable value of that type, which is an example, is correct according to the ABNF? That is what I worked on during this week at the hackathon.
F
So
the
solution
was
was
quite
simple.
Now
that
I
know
what
it
is
one,
so
the
idea
is
to
to
to
build
a
proof
that
this
string
is
correct
right,
and
this
is
why
we
need
things
like
idris
and
that
are
not.
We
need
to
add
to
use
a
programming
language
like
idris,
because
this
is
the
only
one
that
can
build
a
chip,
a
type
which
is
actually
a
proof
right.
So
here
we
are
saying
that
we
can
build
an
instance
of
valid.
Of
the
type
valid.
F
So
the
last
thing
that
one
need
to
do
is
to
write
a
conversion
between
the
type,
which
is
a
perfect
thing,
and
this
right-
and
this
is
the
the
the
the
thing
that
you
see
at
the
bottom-
it
shows
the
type
of
usually
the
implementation
is
missing
and
will
not
fit.
F
But
the
idea
is
that
we
transform
an
instance
of
something
of
type,
a
symbolic
expression
into
a
list
and
into
a
proof
that
this
list
is
valid
for
the
grammar
sx,
which
is
on
the
on
the
right
with
a
small
s
right
and
then
it's
simple.
You
just
have
to
insert
the
result
of
all
of
that
in
your
text
right.
So
this
is
how
it's
done.
You
have
of
our
function.
Example.
F
We
use
a
dsl
to
to
to
define
our
symbolic
expression
and
it's
automatically
verified,
and
it
generates
the
text
that
you
see
on
the
bottom,
which
cannot
be
an
invalid
symbolic
expression.
This
is
impossible
unless
you
tweak
your
compiler,
so
this
is
all
I
have.
This
is
thanks
to
a
a
lot
of
this
discussion
this
week
with
stephen
colleen
and
robert
and
gene
and
luke
lucas,
you
can
find
the
all
the
documentation
and
all
the
links
are
in
the
spec,
which
is
an
internet
draft,
even
access
to
the
tooling.
C
Perfect, okay. So we worked on a little bit of a broad topic, in some sense: IoT security. What that meant for us was that we wanted to implement specifications developed in the different IoT security working groups, and we had been doing hackathons on those topics already for a while.
C
So
we
have
a
pretty
good
code
base,
I
would
say,
but
needless
to
mention
that
there's
obviously
a
lot
of
continuous
development
in
those
groups
and
on
some
of
the
specifications
are
not
finalized
and
we
try
to
integrate
integrate
them
to
each
other.
Like
the
core
groups,
the
core
group
develops
co-op
and
it's
protected
using
ddls,
and
then
there's
firmware,
updates,
using
suit
and
so
on,
and
so
on.
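For a feel of the kind of low-level protocol work involved, here is a hedged sketch that hand-assembles a minimal CoAP GET following RFC 7252 framing (4-byte header, then options encoded as delta/length nibbles). The real work at the hackathon used full stacks with DTLS, not this toy encoder.

```python
def coap_get(message_id: int, path: str) -> bytes:
    """Build a confirmable CoAP GET for a single Uri-Path segment."""
    header = bytes([
        0x40,                       # ver=1, type=CON, token length=0
        0x01,                       # code 0.01 = GET
        (message_id >> 8) & 0xFF,   # message ID, big-endian
        message_id & 0xFF,
    ])
    segment = path.encode()
    assert len(segment) < 13        # keep the option length in one nibble
    # Uri-Path is option number 11; delta from 0 is 11.
    option = bytes([(11 << 4) | len(segment)]) + segment
    return header + option

msg = coap_get(0x1234, "temp")
# b'\x40\x01\x12\x34\xb4temp'
```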
C
Many
other
groups
sort
of
fall
into
that
bucket
and
what
we
also
wanted
to
do,
like
we
did
in
previous
hackathons,
was
to
offer
tutorials
to
help
new
participants
get
up
to
speed
a
little
bit
faster,
because
some
of
the
development
is
embedded.
Development
is
complicated
and
there's
focus
on
low
level
programming.
So
we
thought
that
that
would
be
useful
again.
C
Of
course,
we
to
make
it
more
exciting.
We
prepared
new
presentations,
new
topics
and-
and,
as
I
mentioned
on
this
slide,
we
managed
to
hold
all
the
tutorials,
the
slides
and
the
recordings
are
available.
Some
people
asks
for
it
because
they
like
time
zone
differences
and
other
commitments
made
it
difficult
for
everyone
who
was
interested
to
participate.
C
We
also
managed
to
write
some
code,
of
course,
such
as
sort
of
extensions,
iot,
specific
security,
extensions
to
tls
and
worked
on
empty
dls
code
also
code
for
wireshark
was
developed
and
and
released,
and
I
think
it's
already
merged
into
wireshark.
C
So
you
may
be
able
to
benefit
from
this
when,
if
you
use
wireshark,
we
also
enhanced
the
an
open
source
implementation
called
backhammer
that
is
used
for
as
a
lightweight
m2m
client,
lightweight
m2m
is
a
device
management
solution
that
uses
co-op
and
a
variety
of
extensions,
so
that
was
extended
with
security
support,
bsa
crypto
and
the
connection
ids
that
I
mentioned
previously
and
so
on.
C
So they did the deep dive on those and figured out how to integrate them into their favorite projects, and I expect this work to continue, of course, after this week; so we may have some results also in the next couple of weeks or months, before we meet again at the next hackathon.
C
That is probably going to be discussed at the meetings next week, so I skipped that. And what did we learn? We made some progress on the hacking side again. Most of us have been at numerous events already, so there's obviously a lot of experience, and we're quite fluent already with the various different projects and embedded devices, etc.
C
We've
again
managed
to
identify
some
open
issues
with
the
specs
and
I
believe
the
tutorials
based
on
the
participants
were
useful
again,
I
I
enjoyed
it
listening
to
what
the
other
people
are
up
to,
and
we
also
had
good
discussions
not
only
during
the
tutorials,
but
also
during
the
week
in
in
general,
which
I
think
is
a
big
plus
now
that
we
are
all
traveling
less
and
have
fewer
contacts.
These
type
of
events
are
really
enjoyful.
C
What we were used to, like sitting together in a room and having full attention on the programming, isn't quite there when you have all sorts of other activities going on, and that makes it really challenging to make good progress. I don't know if other teams had similar challenges, but we definitely did, and so I think we tried to make the best out of it, and I'm happy that we organized this.
C
I
also
promised
all
the
participants
of
our
hackathon
to
send
them
hardware,
thanks
to
our
bureaucracy,
that's
going
to
be
delayed
and
I
will
only
send
them
the
hardware
in
the
next
few
weeks,
but
it's
still
maybe
a
takeaway
that
is,
besides
the
experience
and
and
the
code
progress,
there's
something
to
hold
in
your
hands
this
one.
We
had
a
bunch
of
different
folks
participating
two
newcomers
who
joined
a
number
of
people
who
we
had
already
worked
with.
C
So
there
was
a
lot
of
familiar
faces
in
our
in
our
club,
so
to
speak,
so
I'm
happy
that
they
keep
coming
again
and
again
from
hackathon
to
hackathon.
So
if
there's
one
takeaway
for
you,
if
you
care
about
iod
security,
you
might
want
to
have
a
look
at
the
tutorials
and
and
the
link
is,
on
the
right
hand,
side.
So,
yeah,
that's
all
all
from
my
side.
C
Okay, I thought I did; sorry. Who knows where I uploaded them to.
D
A
He has uploaded one. And Benson, I can share your slides if you want; are you able to join us through audio and speak to them?
A
Yeah, it won't work; I get an error each time I try to share.
L
Okay. Most people can see the link, I guess; the slides are online, so you can advance them yourselves. I'll paste it in the chat again, and then I'll...
L
D
All right, so just say next when you're ready to go to the next one.
L
Okay,
thank
you
very
much,
so
this
is
kind
of
the
usual
topic.
Maybe
one
text
api,
but
a
lot
of
the
communication
security
have
relies
on
its
standards,
so
move
to
the
client.
L
So,
if
there's
some
kind
of
standard
tax
api,
this
might
help
people,
particularly
if
they
can
register
in
one
place
and
be
able
to
sell
in
many
others.
So
next
slide.
L
And North America and Europe, so not so much for African and Asian countries, and so...
L
It provides you an easy way to avoid this complication of the registration problem, but it still gives you a way to get localization, to get the tax ID registered. So if you're a business, you can actually reclaim tax on purchases, and, as you say, it covers mostly the US and Europe and Canada.
L
So
we
want
to
be
able
to
get
the
tax
rates
and
typical
things
that
you
might
want
it
to
give.
You
is
an
identifier,
a
text
type,
whether
it's
active
inactive,
a
region
code,
a
percentage
and,
if
there's
a
state
or
region,
in
addition
to
the
country
and
some
information
about
last
updates.
L
Just
so
that
you
know
what
you're
using
is
current.
So
that's!
What's
in
this
stripe,
api
one
could
use
this
as
a
basis
for
extension.
So
next
slide.
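The record shape just described can be written down as a small schema. This is a hypothetical illustration of the fields listed on the slide, with made-up field names, not the Stripe API's actual object:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaxRate:
    """One tax-rate entry as sketched on the slide (names are illustrative)."""
    rate_id: str             # stable identifier
    tax_type: str            # e.g. "vat", "sales_tax"
    active: bool             # active or inactive
    country: str             # ISO 3166-1 alpha-2 country code
    region: Optional[str]    # state/region, when applicable
    percentage: float
    last_updated: str        # ISO 8601 date, so callers can check freshness

# A made-up German standard-VAT entry:
vat = TaxRate("txr_de_std", "vat", True, "DE", None, 19.0, "2021-11-05")
assert vat.active and vat.percentage == 19.0
```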
L
A particularly relevant RFC is 5280; that's used in both 3-D Secure, which is kind of commonly used, and also Secure Electronic Transaction, which was proposed but hasn't seen much adoption, though it looks like it would be easier to modify to include taxes in the workflow that they have for Secure Electronic Transaction. Next slide, please.
L
So
what
next
we
want
to
try
and
do
more
complete
api
specification
at
examples
where
we
have
direct
links
to
tax
authority
for
payments,
so
that
this
is
can
be
automated
and
need
to
consider
security
and
privacy
aspects
in
more
detail.
So
some
of
the
rfcs
that
are
being
considered
could
possibly
be
implemented
in
the
standard
that
would
allow
this
to
operate
efficiently
and
then
try
implementations
in
other
languages
just
to
see
that
it's
easy
to
generate
this
next
slide.
D
Okay,
thanks
again,
I'm
glad
we
were
able
to
get
your
slides
up.
D
A
Well, thanks everyone for all of those great presentations, and we will just go ahead and briefly wrap up then. In addition to all the great work that all of you did, I want to give a big thanks to CNNIC for sponsoring this hackathon; really appreciate the running code sponsors that we've had, especially through these
A
You
know
when
we've
had
to
do
it
online.
I
think
it's
a
it's
it's
great,
that
we
have
the
sponsors
there
and-
and
I
hope
that
we
continue
to
have
a
good
flow
of
sponsor.
So
if
this
is
something
that
interests
you,
it's
called
the
running
code
sponsorship,
there's
information
about
it
on
the
wiki,
sorry,
not
on
the
wiki,
but
on
the
ietf
web
pages.
A
If
you
have
a
hard
time
finding
it,
I
should
have
included
a
link
here,
but
I
didn't,
but
I
you
know
I
can
get
that
information
to
you,
but
but
definitely
big
thanks
to
c
and
nick,
and
so
the
next
ietf
meeting
is
the
dates
are
set.
We
don't
know
yet
whether
it's
going
to
be
in
person
or
virtual,
if
it
is
virtual,
we'll
run
it
the
same
format
as
we
did
this
one
for
the
full
week
before
the
before
ietf113.
A
If
we
do
manage
to
meet
in
person,
we
will
go
back
to
our
format
of
having
it
on
the
weekend
before
so
stay
tuned.
For
more
information
about
that,
we'll
see
what
ends
up
happening
and
one
way
or
another,
we
will
have
another
hack
thumb.
D
And
you
have
had
the
face-to-face
hackathons.
We've
had
plenty
of
remote
participation
in
them,
so
that
will
continue.
A
Yeah
definitely,
and
when
you
know
some
of
these
great,
you
know
improvements
in
me,
techo
and
some
of
the
other
tooling
are
just
going
to
help
us
out
with
have
even
better
remote
participation.
J
A
You can join remotely; as Barry was mentioning, we typically do have a number of people who participate remotely, sometimes bringing their own projects, as you did for this hackathon; so sometimes the champions and the project teams are actually remote.
A
J
One thing, for example: we used the Gather tool very much, to have our meetings within Gather, even though we could have used something else; but we thought, okay, in case other people might be interested in joining, that would give them the opportunity to just pass by. So that was maybe the background for my question: if a hybrid hackathon is planned, would the tools, would Gather, still be available, or would it not?
D
Yeah,
I
think
we're
going
to
be
using
gather
for
hybrid
meetings
for
the
foreseeable
future,
so
I
I
would
expect
that.
D
A
Let's see... and I think that is it in terms of the slides; that's it in terms of the hackathon. Huge thanks to Barry; thanks for running us through the presentations for me. I'm not on my usual computer, so I think that's why I ran into problems when I tried to share those slides.
A
D
A
Well, thanks for that. It's great incentive, though, when I see all the wonderful work that gets done; it's easy to motivate myself to put these things together. And the Secretariat and a lot of other people do a lot of hard work to make this happen too, and to focus on Meetecho and everything. So I'm glad it worked out as well as it did, and we'll have a successful hackathon next time too, either in person or online.
A
So
thanks
to
everyone
I'll,
let
you
go
and
have
a
great
weekend
and
great
week
of
ietf
meetings
next
week.