From YouTube: DASH Workgroup Community Meeting, July 13, 2022
Description:
- Docker Diet PR & improvements
- Discussion of PR to build SAI Thrift server
- PR #75 review
A: I'll go ahead and share my screen and show what we talked about last time. Last time we had a Q&A by Keysight to go over the build work and the improvements that Chris had put forward, and we did a documentation reminder of the different documents we have for reading, and for today.
A: So we've had a couple of different meetings throughout the early part of the week where we've talked about PTF, the Packet Test Framework, and the testing arm of what we're looking into doing. Chris, did you have something you wanted to show today?
B: No, we decided to hold back on that until things settled down a little more. Thanks.
A: Yeah, not necessarily a demo, but could you reiterate the improvements you made over the last week and a half?
B: Okay, let me share my screen here a moment.
B: I'll also just review some of the PRs that are of interest to me, just to get a discussion going about them. I have this pull request here, which I'll describe in a minute, that I call Docker Diet, where I worked on some of the Dockerfile optimizations that I'd listed as to-dos in the dash-pipeline README, and I decided I'm going to tackle them myself.
B: So I spent quite a bit of time splitting a large Dockerfile into several smaller ones, and the total image sizes are in the 2.5 to 2.7 gig range, which used to be 12.5, so a lot of improvement there. It results in cutting the CI pipeline runtime by 50%, because so much time was spent just pulling the Docker image to even start the pipeline. So that's a big improvement, and if anyone's interested in how I did that and wants to get into Dockerfile nuances, maybe we can do a deep dive someday if people are curious. It's just standard techniques. This is waiting to be accepted.
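The "Docker Diet" approach described above — splitting one large Dockerfile into several smaller, purpose-built images — can be sketched roughly like this. This is an illustrative multi-stage sketch, not the actual dash-pipeline Dockerfiles; package and binary names are stand-ins:

```dockerfile
# One heavy "builder" stage; the small final image copies out only what it runs.
FROM ubuntu:20.04 AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git \
    && rm -rf /var/lib/apt/lists/*   # purge apt cache so it never lands in a layer
# ... build bmv2 / p4c here ...

# Final runtime image: no compilers, no build caches, just the binaries.
FROM ubuntu:20.04
COPY --from=builder /usr/local/bin/simple_switch_grpc /usr/local/bin/
ENTRYPOINT ["simple_switch_grpc"]
```

Multi-stage builds (`COPY --from=`) are one of the standard techniques that can take a 12.5 GB monolith down into the 2.5-2.7 GB range described here.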
B: This right here needs to be merged, and I think we might be waiting for some more reviews of that. So it would be nice if this could get its final review and merge soon. I could merge this first, but I'd kind of like to do things sequentially, because otherwise it causes problems.
B: First, I'm going to jump to this one. I noticed a new pull request, and the person responsible for it can talk about it if they want, but I want to show you how the pipeline works. If you go to Actions you can see pipeline runs, and this is the pull request right here that was just done. The pipeline ran seven hours ago: when this pull request was opened, it automatically triggered a pipeline, and it passed — it's a nice green check box.
B: So that shows this now in action, when I'm not doing a demo or something; it's actually live, and it's working for everyone who does a pull request. If you look at it, you can see all the things that are done: it pulls the Dockers and builds all the code. First it pulls p4c and builds, and it doesn't take very long to do — within 30 seconds you've compiled the P4 code.
B: You know whether it's even compiling or not, and it's nice that within 30 seconds the pipeline knows the code compiles. Then it does all these other steps and even runs this simple test that I showed before, this traffic test. The whole thing took like three and a half minutes. So I want to show you another Actions example, and that is this one.
B: So once we submit this, it'll be better for everybody. Besides just the speed, the Docker image size in the cloud is important. Right now we're running in the Azure free runners, and they have a 14 gig disk size and a 7 gig memory size. These Docker images are 12 gig; you're really almost out of space just with your run image.
B: So it's really important to try to break things into smaller images that are loaded on demand for each step. One or two gig is okay, and then you have all the rest of your runner memory and disk for doing the actual work you're interested in. That's one reason why you want to make small Docker images; plus it just speeds everything up, and the same thing goes when you pull these onto your local PC to do development.
B: By comparison, when you build SONiC you need 100 gig in your machine minimum; it's really kind of a burden, so we'll try to keep our things small if we can. So that's that. And then I'm working on another pull request, which I'll submit hopefully within a few days, maybe sooner, and that's going to build the sai-thrift server. I've been working on that for a few weeks.
B: What I've got right now is preliminary: I've got a dev branch in my own fork, and it builds the sai-thrift server. It generates a sai-thrift client and server. That's basically the C++ server that's bound to libsai — the SAI module that Marian explained many weeks ago; he wrote all the auto-generation code that creates libsai.
B: Now it's getting bound to a sai-thrift server, and that means you can run tests over an RPC channel. Let me refer back to this diagram that we've seen many times. The part I'm working on right now is this gray shaded area: it builds the sai-thrift server, and I'll talk a little bit about that, and so now you have a running process.
B: The bmv2 is vanilla from p4.org, the p4lang repository. This will be replaced with the modified bmv2 that the behavioral model working group is working on, but right now it's vanilla, and it can still perform tests like the VNET test, for example; it just doesn't have the connection tracking.
B: So progress to date: I've got this server running; I just barely got it working last night. I can make a Thrift connection and set up a session. I haven't written any tests yet, but the channel is open and talking — a Thrift connection is made, and that's a good sign.
B: The sockets are set up, the Thrift connection and all the libraries are in place. I also want to throw a nod to Intel.
B: They did this sai-thrift server generation in the SAI repo, and I'm utilizing that. That big framework, which was done late last year and improved earlier this year, is really what's responsible for being able to do this; it was quite a huge effort, and I've been utilizing it. I found a few issues in trying to integrate with the DASH SAI headers, but I've already got issues and PRs filed to fix those.
B: So we should be good, and I've been synchronizing with Intel offline on this, so we're getting pretty good to go. Marian, we've spoken last time about some fixed SAI functions that need to be added to this libsai, things like SAI API query, those kinds of administrative functions; I dummied them all out.
B: So that's a to-do for Marian, and he's aware of that, but the dummy code is there and you can just flesh it out; it's all linking and running as far as I know. And then the sai-thrift client: I don't really have this in the diagram, because I just came up with it in the last day or two, but there's another Docker that's built that has all the code and libraries needed to run the sai-thrift client.
B: So this is something Reshma or the Intel people would be very interested in, because they're real strong users of PTF. This Docker for the sai-thrift client will contain all the libraries, everything from the SAI repo that you need to write PTF tests. In fact it will have all the existing tests and frameworks, so you can actually just use it in place in the DASH repo; you don't have to do any setup.
B: Any questions?

D: Hey Chris, this is great work. Thank you so much, much appreciated; this is a lot of work. One quick question: I believe the next step would be integration with snappi for traffic testing?
B: Yes, we need to start writing some actual tests, and we will be doing that in the near future, since this framework's working.
B: Okay, great, thanks. What I'm hoping will happen in the near future is that Marian and you were going to write one or two exemplary VNET tests, I believe.
B: Yeah, and we can talk about how you want to go about that, but you can just write another C++ program and send some packets with scapy if you want, or just sketch out what you want to do, and we can make that one of our first real tests. We'll figure out a way to use our traffic generator to generate those packets.
B: Then we'll have a working example that people can look at and understand, and as I said, I'll be putting quite a bit of work into this area in the coming weeks and months. So we'll have a growing body of tests, and then what we hope is we can create some nice use cases and design patterns, so a developer can come in and go: oh, I'm adding a new feature, I want to test the feature, I see how it's done, I'll just copy and paste and change the details. That's the desired MO, right?
B: When we're doing these kinds of things, we want it to be mechanical and easy. And one more ask: can we start coming up with a JSON schema format to be able to represent configurations in an agnostic way? Because we want to eventually generate the sai-thrift tests, and eventually sai-redis tests and even gNMI tests, from those schemas. It's kind of a big job to come up with a complete schema for the whole DASH config, so what I recommend we do is work incrementally.
B: Okay, we should talk about that PR for a second. In a moment, maybe, I'll let you talk about your PR; it's been baking for four months, so it's time to come out of the oven, I think. I just want to give people the broad strokes of what I'm working on, and also, in these upcoming PRs, I've taken the documentation and hopefully made it a lot better.
F: Chris, if you don't mind, I just want to say one or two sentences about the sai-thrift.
F: We recently did a detailed design review of the changes in the SAI PTF test framework with the small group here, and we will be doing that in the broader group as well. At the moment, basically, we have made all the changes in the automated test framework to adapt it to the DASH devices with fewer ports, and we also adapted a lot of existing test cases to work with the DASH scenario.
F: So currently we are trying to test it with the real hardware, and we want to make sure that we run these tests with the virtual simulation environment that we have, so that it's all backward compatible — so that the eventual framework and the test cases will be able to run on all the different devices, switch and DASH devices as well. At that point we can do a review in the broad community here before upstreaming.
F: We would like to make this PTF framework itself available to at least Keysight and Microsoft, because we are working together on that, but if anyone wants access to it, we will be able to provide that access as well, before we actually upstream it to SAI. That's pretty much it, and we could collaborate on the VNET test case together with yourself, because we are also working on that at this moment.
B: A lot of people are lining up all together right now to start doing these tests; it's nice that it's all coming together at the same time. Working with this sai-thrift server, other than a few tiny adjustments that came up when I started doing the DASH use case, it really worked well. And just to reiterate, I'm using SAI as a git submodule, so think of it like a symbolic link to another repo, and it's controlled by versioning. So you always know what you're getting; it's not just "latest," it's a specific commit, and we can update that at any time to track the evolution of the SAI repo. The test cases are also going to be available in the sai-thrift client Docker that I'm creating, which means you basically have the entire PTF test repo built into the Docker image.
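Pinning SAI as a git submodule, as described above, records an exact commit SHA in the superproject rather than tracking "latest." A minimal local sketch of the mechanism (the repo names here are made up; the real submodule points at the SAI GitHub repo):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Stand-in for the upstream SAI repo.
git init -q sai-upstream
git -C sai-upstream -c user.email=x@y -c user.name=x commit -q --allow-empty -m "v1"

# Superproject records the submodule at one specific commit.
git init -q dash && cd dash
git -c protocol.file.allow=always submodule add -q "$work/sai-upstream" SAI
git -c user.email=x@y -c user.name=x commit -q -m "pin SAI submodule"

# The pinned SHA is what every clone gets — not whatever upstream moved to.
git ls-tree HEAD SAI     # mode 160000 is a "gitlink" to an exact commit
```

Updating the pin later is just a checkout of a newer commit inside `SAI/` plus a commit in the superproject.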
B: So you don't have to copy or bring anything in; it's just there. The implication, Reshma, is we'll work together on getting the workflows and how to do that. It's all right there for you; you don't have to do any adjustments. The whole set of test cases is there; you just have to invoke the ones you want. But I probably won't be able to put much time into trying out the PTF test cases myself.
B: I also want to thank the people who tried out the DASH repo workflows, the make system and all that. People have been giving me some feedback, either directly or offline. There are still a few glitches, like file permissions and things. Unfortunately, the Docker images that I uploaded have "chris" as the username inside the images, and then people have to do a change of permissions or ownership in their own workspace. That's something you find out when some other victim tries your code.
B: You go: oh, assumptions — it worked for me, and it works in the pipeline because root runs everything, but as soon as another user tries to run it, you find these things. So I'll be working out those wrinkles, and please, if you have to hack something to make this work, let me know; don't just move on, because I won't know. Bud's been giving me some feedback recently on that ownership issue, so I'll solve all that.
B: So that's kind of my presentation. Is Prince on the call?
A: He'll be on at 9:30; he had to attend a 9-to-9:30, but he'll be on right after that. We always have Guohan — Guohan's here too, if you have a question. Do you have a SONiC question?
B: No, it's more about the JSON schema. I keep hounding Prince for that, and I'll just ask him again. I realized we could make the ask a lot smaller by saying: let's just use as much as we need to start with. And also, Mircea can talk about what he's done.
D: A quick question, Chris, just before we move on, while we are on this topic: where does the GitHub Action stand today?
B: No, no, no, we're actually doing a lot more than that, and I'll just review. Let's look, for example, at the most recent pull request and look at the Actions.
B: It compiles the P4 code, it generates the API with the auto-generation framework, it spins up a couple of veths, and it launches the bmv2 switch.
B: It does a couple of simple SAI table accesses, which actually go to the switch using P4Runtime, and it actually sends packets through the switch. It installs snappi and the ixia-c traffic generator and sends a thousand packets through the switch, verifying that they come back. There are extra packets that come in, because Linux insists on donating extra lovely control-plane packets into the veths that we don't want, but we ignore them. So it's actually sending packets through the switch.
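The pipeline steps walked through above map naturally onto a GitHub Actions workflow. This is a hedged sketch of the shape only; the job names, make targets, and action versions are illustrative, not the actual DASH workflow file:

```yaml
name: dash-ci
on: [pull_request]             # every opened PR auto-triggers the pipeline
jobs:
  build-and-test:
    runs-on: ubuntu-20.04      # Azure free runner: ~14 GB disk, 7 GB RAM
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: recursive   # brings in the pinned SAI submodule
      - name: Compile P4 code     # fails fast — done within ~30 seconds
        run: make p4
      - name: Generate SAI API    # auto-generation framework builds libsai
        run: make sai
      - name: Traffic test        # veths + bmv2 + ixia-c echo test
        run: make run-all-tests
```

The early P4 compile step is what gives the "within 30 seconds you know the code compiles" feedback described above.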
B: So it's actually doing a real traffic test, and what we're going to do is start doing configuration of some of the DASH services, like VNET, and then do more stringent packet tests. This is just echoing UDP packets and making sure they come back; a thousand packets is just arbitrary. We send 500 packets into each port, they come back, we count them and say: yeah, they went through. We'll start doing tests where we look at the packet contents and make sure the proper pipeline transformations were done, and in the later pipelines — let's see, okay, this is the one that I'm working on. This is in my fork; it's not a pull request yet. After I run the ixia-c traffic tests, I actually spin up the sai-thrift server and verify that it runs, and in another hour or two I'll have another test here.
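The pass/fail logic of that simple traffic test — send a fixed number of UDP packets per port, count what comes back, and tolerate the stray control-plane packets Linux injects — can be sketched in plain Python. This is only an illustration of the counting logic, not the actual test code:

```python
def traffic_test_passed(sent_per_port, recv_counts, is_test_packet):
    """recv_counts: per-port lists of received packets; is_test_packet
    filters out stray control-plane traffic (LLDP, IPv6 ND, ...)."""
    for packets in recv_counts:
        test_packets = [p for p in packets if is_test_packet(p)]
        if len(test_packets) < sent_per_port:   # any drop means failure
            return False
    return True

# 500 UDP packets into each of 2 ports; one stray ND frame is ignored.
recv = [["udp"] * 500, ["udp"] * 500 + ["icmpv6-nd"]]
print(traffic_test_passed(500, recv, lambda p: p == "udp"))  # True
```

The key point is that extra unexpected packets don't fail the test; only missing test packets do.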
B: Does that explain it? Does that help?
D: Also, we run those tests, and today you are showing the ixia-c. Is this basically going to be replaced with snappi?
B: Snappi is a client library that makes the ixia-c traffic generator easy to use. So snappi is like the client library; it's very Pythonic. There's a Go version too — if Go is your jam, you can write it in Go. I'll be using Python, but we have certain people that love to use Go, and they do it that way. The software traffic generator that we're using is called ixia-c, and the free version is available on Docker Hub.
B: It has some performance caps built in — because, of course, we'd like to sell licensed versions — but the free version is totally appropriate for these kinds of tests, and snappi would be the client library that programs it. There's already an example of that; I did review this once briefly. If we look at the code under test/test-cases, the bmv2 model: this is not even a PTF test, it's just straight Python. It imports the snappi library, and it's on PyPI.
B: So you just do a pip install, but that's even done automatically in this build workflow; you don't have to do anything manually except clone this repo. The script does the following: send a thousand packets from one port to another at a rate of a thousand. Here's the Python; you can create flows. I won't walk through all the details. We have sites under the open-traffic-generator repo on GitHub, and there are links to all those sites in our README here in DASH, so you can follow up and learn about it, and there's even a Slack support channel, so you can just reach out to us directly.
H: Chris, sorry — you were going through my pull request to the build pipeline, so I've got a question there.
B: Yeah, let me go to the pipeline — whoops — I wanted to go back to Actions.
B: Now remember, this is what's in main right now; once my pull request is merged, it'll look different — it will be more containers, smaller ones. So first it pulls — and this is being pulled right from Docker Hub, the p4lang Docker Hub registry — and that pulls p4c, and then we compile the code and generate the JSON files for the bmv2.
H: My question was about this Docker here.
H: Do we have to build this on our own for our development?

B: No, you don't; it's pulled from Docker Hub, so you don't have to.
H: So one issue I was facing with this one is that this container here has some users and groups that were built into it when it was built.
H: How do we share files or share volumes with this one? Because the user permissions are getting messed up with that.
B: Right, so I did mention that, and I'll expand on it a bit. I'm sorry you ran into that. Bud just reported it to me yesterday as well, and he had a simple workaround, which I'll share; I can probably copy it down here in a moment.
B: He ran into the same thing. I haven't spent time fixing that, but I will; and as I mentioned earlier, that's one of the things that was discovered when other people started using this.
H: That's how I — yeah, I already had a pre-built one, so I was able to use your command-line option to override with my container, and I was able to proceed. I was just wondering how to do it.
B: Yeah, when that happens, what I would say is: let me know. I didn't realize that problem was built in there; other people who tested it may have built it themselves, so they didn't pull it from Docker Hub. So it was like an undetected bug.
B: On the GitHub runner everything's running as root, so we just avoid the problem, but I shouldn't have to do that. This may actually be the root cause of the reason why I had to go to root — I was having problems and didn't realize that was the problem. So what I want to do is figure out a way to solve that problem gracefully for when people use it locally; I'll probably create a runner user like dash-user or something and put the right ownership changes in the makefiles.
B: I've already run into that problem. This is what I just typed in the chat window — the command that Bud said fixed the problem for him, this chmod. You just add read/write permissions for group and other in these directories, the SAI test and bmv2 directories.
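Bud's workaround, as pasted in chat, amounts to opening up group/other permissions on the container-written output directories. A hedged reconstruction — the directory names follow the transcript, and your checkout layout may differ:

```shell
# Files created inside the container are owned by a UID that doesn't exist
# on the host; granting rw to group and other lets your local user keep working.
mkdir -p SAI/test bmv2        # stand-ins for the real output directories
chmod -R go+rw SAI/test bmv2
ls -ld SAI/test               # group and other now have read/write
```

This is a stopgap; the cleaner fix discussed above is a dedicated user plus ownership changes in the makefiles.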
B: So it's going to look different: the next pull request will have more Docker images, like p4c.
B: Now, I'm not using the p4lang image; I'm using an even smaller one, where I took the p4lang image and stripped stuff out. The sai-thrift image takes only 32 seconds to pull, and then this bmv2 builder image probably doesn't add much, because I think it shares a lot of content with this one.
B: Docker uses a layered file system, called the overlay file system, where all these layers are stacked almost like transparencies, and the total file system is a combination of layers. So if you've already got underlying layers that are common, you don't have to pull them again; Docker is very smart, it just pulls what it needs, so it can be pretty fast if you construct them properly. And one of the things is, you have a lot of smaller images.
B: The runtime footprint is smaller and the disk footprint is smaller, and if you're sharing layers between a lot of them, the deltas are very small, so the actual footprint on your disk can be much smaller than you think. And then, let's see, there's another — oh, I don't have another one, but there'll be some other images, like a sai-thrift server and a sai-thrift client.
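The layer sharing described above is why several small images can cost far less disk and pull time than their summed sizes suggest. An illustrative sketch (the image names and packages are hypothetical, not the real DASH images):

```dockerfile
# Dockerfile.p4c — hypothetical image A
# Starts from the same base as image B, so every ubuntu:20.04 layer is
# stored on disk (and pulled over the network) exactly once.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends p4-compiler \
    && rm -rf /var/lib/apt/lists/*

# Dockerfile.bmv2 — hypothetical image B (a separate file in practice)
# Only this RUN layer is unique; the base layers are shared with image A.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends bmv2-switch \
    && rm -rf /var/lib/apt/lists/*
```

With a shared base, pulling image B after image A transfers only image B's delta layer.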
B: Speaking of Docker images, Christina and I are working with some of the Microsoft IT people to set up an Azure-hosted container registry — ACR, Azure Container Registry — and so we'll move all the images from Chris Sommers' Docker Hub to an Azure instance. We just got the URLs for that this morning, so sometime in the near future I'll try to migrate to it. I'll try to fix the username so it's not "chris" in the images, even though I'd love people to remember who wrote them. And then we'll also, hopefully, be migrating.
B: We'll move the runners to dedicated Azure runners, so they'll be more performant, and sometime down the road — this could be many months — we'll start utilizing some of the same build methodologies that SONiC does, where you have dashboards and artifact repositories with all the artifacts stored, and logs and everything. It'll be industrial strength.
A: Yeah, let's stop sharing. Right, so then five minutes for me at the end. Mircea, did you want to go ahead?
E: I did this dash-one-vpc-one-ip JSON. I used a sample that was there and placed the values in here, plus did a few changes, plus removed a lot of things which were not yet, let's say, available — like HA groups and so on, which are part of the sample but which our test is not doing anything with at this time — so I just removed those sections. So we have all the VPC numbers, the IP addresses and everything based on the schema.
E: Now, the IDs here are, how to say, not necessarily human-readable — like a big UUID — so I tried to make them human-readable, to be easier to follow everything, or put names, let's say. This is for one IP, and that's the intent; and we are almost ready for any test that is scaled from now.
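To give a flavor of the human-readable-ID style described here, a one-VPC/one-IP config might look roughly like this. The field names and values are purely illustrative — this is not the agreed schema, which is exactly what is under discussion:

```json
{
  "vpc": [
    { "id": "vpc-1", "vni": 1000, "address_space": "10.1.0.0/16" }
  ],
  "eni": [
    {
      "id": "eni-1",
      "vpc": "vpc-1",
      "mac": "00:aa:bb:cc:dd:01",
      "underlay_ip": "10.0.0.2"
    }
  ]
}
```

Names like `vpc-1` and `eni-1` replace opaque UUIDs, which makes hand-editing and reviewing the sample far easier.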
E: It will not be a dump of a JSON file; it will actually be a Python script that you'll have to take and run, and that will generate the JSON file, because it gets to quite a significant size — sometimes 600 meg, one gig and something — when you scale it all the way up. But for the first sample, I just used the JSON format.
E: The only comment I have here is that, Chris, you said you want a config that can be loaded and so on, but when this was presented there was a note that said: this is just an example, this is not what it will be in production. So it's a bit of a question.
A: I'm sorry, I'm trying to follow that thought. So we're asking for something that'll be used in the production environment of the SONiC team?
B: Well, my interpretation of this conversation is that there's been an ongoing kind of mismatch in, let's say, what the importance or use of this JSON is. We need some kind of an intermediate format so we can test the middleware, and it's been stated since day one.
B: SAI is going to be the integration point for all the implementations, and SONiC just sits on top of it, but the SAI has to be standardized. So we need a standard way to represent test cases that doesn't depend on or assume a northbound interface or an SDN controller. We really want to keep these things decoupled, because otherwise, if you only have one schema at the very top of the northbound, then you need the whole stack to be able to do your testing; otherwise everything's just ad hoc all the time.
B: So we really need a canonical intermediate format, and, as I have talked about in the past, we intend to create a platform that will take that intermediate format of configuration and automatically generate SAI API tests through sai-thrift, sai-redis tests that go through the Redis DB, and then hopefully gNMI tests, because all the configurations are pretty similar — there aren't a lot of deep transformations.
B: But even if that's true, then I would say: maybe we do want one, just so we have what we need for testing. We need a test definition language, so let's just make it this JSON format and agree on it. Before you joined the meeting, Prince, I was saying — because I've been asking for this for a while — it feels like boiling the ocean, because it's a large schema, and it's hard to go through it with a fine-tooth comb and make it perfect.
D: Chris, a quick question on what you're basically talking about: could this canonical format, or the schema that you're talking about, be the DASH APP_DB, which eventually just drives the SAI API?
B: I mean, it could be. I don't think we care exactly what the schema is, precisely; it just needs to represent the config. And if it's APP_DB level, I think the thing is it has to not require a big transformation between APP_DB and SAI — otherwise you're rewriting a lot of orchestration code in the test platform, right? Exactly, and the assumption is it's not even really much of a transformation; it's almost just a translation.
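The "translation, not transformation" point — an APP_DB-level record maps onto SAI-style attributes mostly by renaming fields — can be illustrated with a toy mapping. The keys and attribute names below are invented for illustration; the real DASH APP_DB schema and SAI attributes differ:

```python
# Hypothetical APP_DB-style entry for a VNET mapping.
app_db_entry = {"vnet": "vnet-1", "overlay_ip": "10.1.1.5", "underlay_ip": "10.0.0.7"}

# A 1:1 field rename is all the "orchestration" the test platform should need.
FIELD_MAP = {
    "vnet": "sai_vnet_name",
    "overlay_ip": "sai_overlay_dip",
    "underlay_ip": "sai_underlay_dip",
}

def to_sai_attrs(entry):
    """Translate an APP_DB record into SAI-style attributes: pure renaming."""
    return {FIELD_MAP[k]: v for k, v in entry.items()}

print(to_sai_attrs(app_db_entry))
```

If the mapping ever needed joins, lookups, or derived state, that would be a transformation — the orchestration-rewriting case the speakers want to avoid.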
B: Yeah, I mean, I've never tried to take a stand on how that schema should map to APP_DB or whatever, as long as it's something that's easy to convert into the different interfaces. It's been an assumption — and no one's really contested it — that it's not much of a transformation, unlike certain SONiC configs where there's a big transformation, and you don't want to have to write orchestration-type code in your test platform, right?
B: Right — like Roman numerals and decimal digits: it's just a slight change, not a big spread.
D: Right, so I see that Prince has made an attempt to explain this thing, or give an example of something like this, in the SONiC HLD.
D: If you were to look at it, there is an example of how things are going to be stored in the DASH — a very simple, small example of the DASH APP_DB — and we can see there how the schema is defined as it flows from the northbound into the APP_DB and eventually to the SAI. So that might serve as an example for us to see whether or not this is something we can reuse for testing purposes.
B: Yeah, and Mircea was just showing an example where he took the JSON example — which is not trying to be definitive, it's just an example to get discussion going — and kind of polished it up a bit, changed a few things, and said: let's try this. So I don't know how that relates to what you just proposed, but we do need to start somewhere and start saying:
B: you know, this is it. And if it becomes the de facto standard by the fact that we start using it, that's fine by me. We just need to do something, and the reason I keep pushing this — and I'm upping it a bit now — is that we're about to make a significant investment in a framework that starts consuming and/or generating this data and translating it into APIs, and we don't want to have to backtrack.
B: And again, just to be pragmatic, let's try to do a piece at a time: let's work really hard on the part that we're going to need first — for example, the first test cases Marian is going to help define and Intel is going to help define — and we also look at what Mircea was just sharing in that PR.
B: Mircea, do you want to make a pitch to get this PR accepted soon? And I think we have Guohan here; he was concerned about where this test repo would live.
E: Yeah, pending where it lives. The other thing is, I was still finalizing some work on the CPS test for this; there are a few things where I still need to make small changes, and once those are done, yeah, we should accept it. I'm trying to make them today — I was trying last week, but let's see.
E: And once this is accepted, I'll make the one for 48,000 IPs, and then we're going to move to 4.8 and 9.6 million.
A: Great. Can you guys see my screen?
A: Okay, just to wrap it up then: these are just my tracking work items from our conversations, and the top four are done. And it looks like, as Prince mentioned today, he did a review of PR — was it 137 or 127, Chris? — and he was just going to look at it a little more today; I talked to him about that. So here we're covering the sai-thrift server integration.
A: We're talking about this schema here, and we've got overlay test cases here. The DASH container may be ready at the end of August, is what we were thinking. Of course, we always need volunteers to write test cases.
A: We have the counters, metering and telemetry, and we're looking to provide some updates soon on what counters we think we'll need. And then we have the memory footprint, after we get through our initial swing at this, to see what kind of data we can provide back. So these are the work items I have; if you want me to add any more, let me know.
A: Should we do a separate item for that proposed demo for when we're ready?
F: I'm sure that's fine too, and for the overlay tests, please add Intel as well.
H: Yeah, so this is Mukesh from AMD. I just wanted to bring up that I have created a PR — the recent one that you saw there, like what Chris was showing — to address some gaps that we have in the routing.
A: All right, thank you. So I hope you guys are okay with this format I'm using; it's helping me keep track. We're six minutes away from ending, so: anything else for the rest of this meeting?
B: Well, my view on this is: right now, where they live is fine, and I know that long term we might want to put them in the sonic-mgmt repo, but isn't there also a governance issue with the Linux Foundation changes, et cetera? That was right?
A: I did drop, so: before the migration to the Linux Foundation, I remember we were keeping it small, keeping it fast, and keeping it where it is, until we could at some point integrate. Then the Linux Foundation move happened, and we agreed to just keep it here, because that DASH content didn't move to the Linux Foundation. That was my understanding.
K: The tests, of course, we have to have in sonic-mgmt, because that's where the infra will be defined, and anyone can use that as the topo, or the topology, to run the tests. So if we would have it, yeah, in this current repo, and see how it goes — thank you so much.
K: But, Chris, the point is: we'd have some here and some there, which will be like disconnected tests, right?
K: If you have one place where the topology is defined, then you can use that for different purposes, right? Otherwise this section will maybe be used by very few people, and then the remaining folks will be using another infrastructure and the tests from another repo.
G: I mean, if it doesn't matter and you can proceed independently under that, then it's probably fine. I don't know what happens when you go there — does something change, where somebody comes in and says: hey, I'm now leading this? Or what happens? I don't think that would happen, but we should probably take it offline and figure out what it really means.
A: And I'll leave that — I'll take that up, Gerald; I've talked to SONiC leadership before, and I'll take that up. I think we have one more hand in the air before we have to go.
L: Yeah, hey, this is VJ from AMD Pensando. I just wanted to bring up one more thing: we identified a few gaps, or enhancements, in the behavioral model, and we want to introduce a tunnel table for indirection; that's another PR which is in the works. It's not related directly to any phase-one feature, but it is useful for phase-one features.
L: This is going to be a behavioral model change. It's basically introducing a tunnel table for indirection, something we noticed in the behavioral model which will be useful for now and for future features.
L: So yeah, I think we have to discuss the mechanics of that. We just wanted to know whether we can go ahead and introduce it, so we can discuss around the PR.
A: We have the behavioral model meeting tomorrow. Vijay, you have the invite to that, right? We could probably talk about it in depth in that meeting.
A: Okay, all right! Well, it's 10:00; I unfortunately have another call, but I want to thank everybody today — lots of great discussion. I'll send out notes and put the recording on the YouTube channel, so if anyone missed it, they can listen. Thank you very much, thanks for coming, and I'll see you next week.