From YouTube: SONiC DASH Workgroup Community Meeting Nov 16 2022
Description
Issue #235 sai_get_switch_attribute() returns 0 when requesting number_of_active_ports
Issue #233 inbound_routing scale
Reshma: cases have been merged into the latest SAI now; however, VXLAN processing is not correct yet. The PTF pull request has been approved and will be merged soon
VolodmyrX: waiting on PR Merge in PTF (#176)
A: I did notice a couple of issues were talked about this week, just over email. If we want, we can go over what was going on with those, but I'm going to share things differently. Sorry, let me share my whole screen.
A: I mean, I know it's just brief, but yeah, go ahead. Thanks, Chris.
B: Quickly, Christina: we from Intel also want to discuss the 277 PR. I think we had quite a few questions last time, so we want to discuss it.
C: Yeah, 277. I just made a comment: in the initialization phase of PTF and pytest, there's some querying of switch attributes in order for the test to proceed properly. For example, you want to find out how many ports are on the device, that kind of thing, and for a long time those attributes weren't implemented in the behavioral model libsai. So I've added enough code to simulate, or provide, the right information to satisfy those switch attribute queries at the beginning. I added a couple of, well, not fake ports, default ports, one and two, and then some other things like the default VLAN ID, VRF ID, bridge, etc.
C: So basically I did add some attributes here, and this is in the template code that gets turned into C++; I've just added the callbacks. There's also a placeholder, so we can add more callbacks as needed if we ever implement any more port APIs, but it's just enough to get past the test initialization. Doing this will let us fix these tests, for example here in the SAI Thrift test, here in PTF.
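For illustration only: the real change lives in bmv2's C++ libsai template code, but a minimal Python sketch of the idea described above (canned answers for switch-attribute queries, with two default ports and default VLAN/VRF/bridge IDs) might look like this. All identifiers here are hypothetical stand-ins, not the actual SAI or bmv2 symbols.

```python
# Hypothetical sketch of satisfying test-init switch-attribute queries with
# canned defaults. Names and OID values are illustrative, not real SAI data.

DEFAULT_PORT_OIDS = [0x1001, 0x1002]   # the "default ports, one and two"

SWITCH_ATTRS = {
    "SAI_SWITCH_ATTR_NUMBER_OF_ACTIVE_PORTS": len(DEFAULT_PORT_OIDS),
    "SAI_SWITCH_ATTR_PORT_LIST": DEFAULT_PORT_OIDS,
    "SAI_SWITCH_ATTR_DEFAULT_VLAN_ID": 0x2001,
    "SAI_SWITCH_ATTR_DEFAULT_VIRTUAL_ROUTER_ID": 0x3001,
    "SAI_SWITCH_ATTR_DEFAULT_1Q_BRIDGE_ID": 0x4001,
}

def get_switch_attribute(attr_id):
    """Return the canned value for a queried switch attribute.

    Unknown attributes raise, mimicking a not-implemented status; the real
    code keeps a placeholder where more callbacks can be added later.
    """
    try:
        return SWITCH_ATTRS[attr_id]
    except KeyError:
        raise NotImplementedError(f"attribute not implemented: {attr_id}")
```

The point is only that the init phase gets consistent answers (e.g. the port count matches the port list) so the test can proceed.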
C: So it's a pretty modest pull request. Any questions about this? This is a blocker for a couple of other PRs that are in the pipe, so it'd be good to get these finished. Vladimir and Anton both helped me get this spiffed up and in shape, although it looks like I need to delete a redundant line I somehow got in there. So hopefully Marian will have a chance to review this, because it's blocking some other things. I think it's pretty modest. So that's that one.
D: I just wonder, since this is mostly switch attributes, as I understand it: can we have a separate file, like a switch.cpp or something like that, to do it, rather than putting it into the utilities?
C: I think it was just a convenient place, because of the way all the code generation is done. There are some existing Python scripts that run all these templates through and put the output in the right place to build, and this spot has been collecting some code over time. I suppose it could be moved, but I'd probably need a little bit of help from Marian to reorganize some of this, or it would take a little more retrofitting.
C: So the way it works is, all this code is run through some Jinja templates, and the output goes into another directory, which is not shown here because it's generated only at build time, and that is then compiled into libsai. So rather than try to restructure all this, I've just been adding code to this template file, but maybe if the template file gets larger, eventually we'll want to put it somewhere else and break it up.
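The flow described above, render a template at build time into C++ source that is then compiled, can be sketched in a few lines. The real pipeline uses Jinja2 driven by the repo's Python codegen scripts; this stdlib-only stand-in uses string.Template instead, and the template fragment itself is invented for illustration.

```python
from string import Template

# Stand-in for the Jinja2-based flow: a template fragment rendered at build
# time into C++ source. The C++ snippet below is illustrative only.
CPP_TEMPLATE = Template("""\
sai_status_t get_switch_attribute_$name(sai_attribute_t *attr) {
    attr->value.u32 = $value;
    return SAI_STATUS_SUCCESS;
}
""")

def render_callback(name: str, value: int) -> str:
    """Render one hypothetical C++ callback from the template."""
    return CPP_TEMPLATE.substitute(name=name, value=value)
```

In the actual repo the rendered files land in a generated directory that is compiled into libsai; this only shows the render-at-build-time shape.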
B: Oh, Chris, is this related to Volodymyr's issue, or something he had opened earlier related to the support in the bmv2 tool?
E: Yeah, so there are a few issues that I probably opened, and this...
E: ...issue, right, and pull requests that Chris actually helped with, to add support into bmv2. So this is a great addition, I guess, and I think we need to review it as soon as possible and merge it.
C: Hi, Marian. So we're looking at this PR here, which we were hoping to get your review on soon, and the only question that really... oh.
C: I'm not sure; I'll have to ask this question again. The main question is about this utils file: we put a lot of code in here.
C: It's just sort of been collecting the fake libsai callbacks, like getting certain switch attributes, and Guohan was asking whether maybe this should be in its own file and not in this utils file.
C: Yeah, let me take that, I guess. I'm trying to think: well, I would need to modify the code generator Python script that runs these, correct, because it has a kind of hard-coded list of files to process.
C: Like this, right? Yeah, so it's still templatized, okay, but it can be in a, you know, switch file, I guess. Let me see if I can refactor this and break the file out.
C: So, okay, thanks for the feedback, Guohan. It's been on my mind for three months or so too, so I guess I needed a push over the edge. I'll try to do that shortly so we can get this approved quickly. Marian, do you think you could take a look at this and provide any feedback, other than the file splitting, because this is blocking a couple of others? All right, thank you. Good. Okay, that was productive.
B: Yeah, Christina, Volodymyr will be presenting soon, because he has a few things to show. So he...
B: Yeah, I think Anton has done a lot of work to take the test cases from different places and actually merge them all into one new directory of test cases. That will really help with the usage in the CI/CD as and when we add new test cases and they reside in this folder. So it's more standard in terms of the location, but also very much easier to integrate into the CI/CD.
B: So yeah, and another thing that we will talk about after this is the auto-generated test framework changes for DASH, as well as the new test cases for DASH and the HLD related to that, right?
B: The high-level design has been merged into SAI; I think I mentioned it last week or so. We want to be able to use it in the DASH CI/CD as well. Because we have already merged that code, it's available as part of the latest SAI, but there was one hindering issue there, which was that some VXLAN processing wasn't correct in PTF in general.
B: So we have made a pull request into the p4lang PTF, and that has been approved, and we are going to be merging that. Volodymyr will be merging it soon once the CLA with ONF is sorted out, which is in progress right now. So with 277, it's basically to standardize the location, but in the process a lot of work was done to actually take the test cases from different places and put them in this folder. So, Volodymyr and Anton.
G: Yeah, I can. One second, Tim. Okay, so actually there is not much change. The main difference from the previous time is that I've made the fixes for the last comments: I renamed 'experimental' to 'sanity', and I also created that subfolder, ptf, to make it easier to understand what kind of test cases you can find there. So that's actually all the updates, and I hope we covered all the questions and comments, by the way.
G: There was also some confusion, because I wrote that I merged those two test cases into a single one. So it's not deleted tests: these are the saithrift PTF test cases that I moved to the sanity tests, so one shows as deleted and one as added, but they're the same test cases. I put some into sanity, and those others were already in the functional folder.
G: So please take a look, and if you do not have any major comments, I would like to verify that the CI works, to resolve the issue with the verification, and then we can merge it, because I see that there are also ACL test cases waiting in the PR, and I would like to have all of them in the standard location.
E: So Anton mostly covered this, but I can just add something about the changes to the Docker files. So, Chris, Guohan, I think we need to clarify and understand here how we can proceed with this pull request, because at this moment it's still marked as a draft. How should the changes to the Docker file go into the main branch? Should we merge them as-is, or should we define or create a procedure for how Docker files can be modified first, you know, merged to a separate branch to verify the CI works?
C: The README for the DASH Docker files...
D: I saw that; we discussed this last time, right? This saithrift client, right? So we should build it and not try to download it from the ACR, and then use that build artifact to run the tests.
E: Yeah, so we will build it, by the way; we tested it and we verified that those changes work. But can we merge these changes into main, so that someone will then take and build the Docker container and upload it to the Azure infrastructure manually? So who should take care of this?
C: Okay, this is kind of a two-part answer: the short version and then the long version. The short version is: if you want to change the Docker file, I can make a branch in the DASH repo that you can merge to as a staging branch, and then once that's accepted (it's just a staging branch) I can accept it.
C: It will publish as an intermediate step, and then once it's published, it can be merged to main, because then the file will be there to be pulled. That process is documented, and I've mentioned this a few times; I'm not sure if people have had a chance to review it. Now, the longer answer, which Guohan alluded to and which we talked about last week, is to restructure some of these build procedures to conditionally build images as needed: only pull them if they're available, otherwise build them.
C: If we just start building all the images every time we need them in a makefile, excuse me, it's going to increase the run time for everybody, both in their personal builds as well as in CI. So I think we need to proceed a little bit step-wise here; right now we have a procedure that works. It may not be perfect, but perfect is the enemy of done. So, to circle back, Vladimir:
C: If I make a staging branch for you, you could merge this to that, or you can do a PR to that, and then we can upload that quickly.
B: The question is, I mean, shouldn't it be like a regular PR merge, right? Why do we need a staging branch and all those things for this one?
C: If we do that, we won't have any confirmation that this is going to build in CI until we push to main, and at that point main may be broken. So we want a staging branch: the staging branch will accomplish the publishing, and then the remainder of the workflow can proceed. If we just push this as-is, if you just do a PR like this, you can't actually publish; it fails because you don't have credentials from a fork. And, like I said, this link...
C: ...that I pasted in the chat explains this whole publishing phenomenon and the issues concerned. The current workflow and solution may not be perfect, but it works. A better one would have much more logic in there that looks for images and, if they're not published, builds them on demand, conditionally, and that takes effort: build engineer work.
E: I see. So it looks like we need to put something in the backlog to improve the CI, right, to do this stuff automatically in the case of such changes as we made. So at this moment, as I understand it, you can create the branch, we can rebase the changes onto that branch, we can remove the draft mark, and we can merge it if everything is well. Then you will manually update the Docker images, and then we can merge into main. It's...
C: ...harder to produce, yeah. As far as the right way, the long-term way: I actually would rather not be the person owning that whole project. I'd rather someone who, let's say, is plugged into the whole SONiC way of doing things gets involved and provides some advice, or actually the talent to do it in a way that's aligned with the full SONiC build process, because it's time-consuming, build-engineer type of work, and it can be...
C: ...pretty involved, and I don't necessarily want to be constantly in the loop on that. So I have no qualms about someone redoing this the right way, but, you know, I just want to...
D: ...ease the transition. So how about this: can this Docker file change be a separate PR? We can create it, and then once it's merged it will be built and uploaded, and then we can test all these test PRs, right? Is that doable?
D: That would save some effort for you: you don't have to create that staging branch, right? So we just merge this Docker file, then it's built and uploaded, and then the rest of the code can be tested.
C: The problem is the tags that define which Docker file version is being used. They define not only the Docker file build, but also which image the makefile uses to run the CI; they're not separated, and that's probably another area of improvement. So if you update the tag, it's also the tag that says which Docker image to pull to run CI.
C: If you change a Docker file, it's going to rerun CI, because building and making Docker files aren't decoupled from the overall test process.
C: That's not stated very well, but there's an environment variable file that tells which image is being used for a particular Docker file, and that environment variable is used not only to build a Docker image but also to use it in a CI run. So you can't just change one thing at a time.
E: Yeah, so last time we also discussed why we don't use the official SAI repo in DASH. We proceeded a little bit with that: we created one change in the official PTF repo, per the link before, that fixes one of the issues we found with the VXLAN packets. It has already been accepted, but still not merged. Once it is merged, it looks like we can try to proceed with moving the DASH submodule and SAI submodule to the official one.
E: But here I see two ways. One is that we will still need to propose updating the PTF inside the official SAI repo, to move to the newer, latest version of PTF, and another way is to use or update the PTF submodule inside the DASH CI, or inside DASH. So yeah, we still wait for the pull request's merge; once it is merged, we can then decide. We will definitely create another pull request for the SAI repo with the updated version of PTF.
C: Vladimir, do you have a guess, or a guesstimate, of how long this process will take, as far as going through all the reviews and merges of the other...
E: Yeah, so the PTF changes are already approved. Yeah, it looks like it was approved just 15 minutes ago, so we just need to wait for the merge, and once it is in the PTF, we can then create a pull request for the SAI repo to update the PTF. But I'm not sure how fast that pull request will go in the SAI repo, because, you know, currently the SAI repo uses some specific version of PTF.
B: It might go in briefly in the cycle next week or so, so after that it might be possible.
A: Did we want to talk about showing 233 to Marian, just in case it wasn't seen? Anton? Yeah.
G: Okay, I can. Then I will share my screen. So, yeah, that's the case with the inbound routing scale. Actually, there was a question whether it's ternary or something else in the match field, and I also commented that in fact it doesn't work either way, and I provided a link to the repository where I created the cases to reproduce the issue: the inbound scenario with that error, the invalid match type. So I would like Marian to take a look at it.
G: Take a look at the test case and the bmv2 model, because I see that tests may pass with some modification if we revert this PR, so somewhere we have a bug. So, Marian, please take a look and give your feedback on whether this is an issue in bmv2 and the match type for inbound routing, or whether it's something else.
G: Okay, so then my question would be whether we can somehow verify inbound routing, because yet again we're back to the discussion that bmv2 should be kind of a standard model that we should take a look at. But...
G: ...I probably didn't get it, so again, could you please explain a little bit? So that's...
F: Yeah, so the error you see when you load the pipeline is related to the ACL rule's invalid match type. You can change it temporarily; there is, by the way, a define that makes it compliant with the current capabilities of bmv2, which will use the, I forgot, optional match type.
F: Yeah, so try changing the match type. There is a define in the ACL file, dash_acl: a define that uses match types native to bmv2. It will provide limited functionality for the ACL, but you can run inbound or outbound testing, whichever.
G: And one more thing: they opened another issue. I wanted to clarify, because we have some requirements about deletion, and I would like to understand whether maybe I misunderstand the requirement, or what we really expect. If we are deleting some ENI, do we expect that all the mappings mapped to that ENI will also be deleted, or should we get a status of 'object in use'? Because we have requirements 11 and 13, which are kind of conflicting, in my opinion.
F: The object model does not allow you to delete an object, or entry, sorry, an object that has references to it. You cannot delete that object ID.
E: So it looks like they suggest the documentation needs to be updated, and...
H: For the ENI, the mapping is associated to the VNET, so that's not dependent on the ENI, but the upper-layer implementation will make sure that all the route tables and associated associations are removed before the ENIs.
H: Yeah, so my point is: there is already a check in the upper layer, but in case it fails, or if there is a bug, and that passes on to SAI, of course SAI can return 'object in use'. Yeah, I agree with Marian.
H: Yeah, okay, the same for the VNET: all the mappings have to be deleted before the VNET delete.
G: There's only maybe one minor thing: there is also requirement 12, which says that we can delete twice and there should not be any error, but bmv2 actually returns an error on the second delete. Let me verify it; if this issue really is present, I will open a separate bug. So that one is okay. Thank you.
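As a rough sketch of the delete semantics being debated (not the bmv2 implementation; all names invented), a reference-tracking store can both return an 'object in use' status while mappings still point at an object, and treat a repeated delete as either a no-op or an error, which is where requirement 12 comes in.

```python
# Illustrative sketch of the discussed delete semantics: a delete fails with
# "object in use" while references remain, and a flag toggles whether the
# second delete of an already-removed object is an error or a no-op.

class ObjectStore:
    def __init__(self):
        self.objects = {}  # oid -> set of referencing oids

    def create(self, oid):
        self.objects[oid] = set()

    def add_reference(self, oid, referrer):
        self.objects[oid].add(referrer)

    def remove_reference(self, oid, referrer):
        self.objects[oid].discard(referrer)

    def delete(self, oid, idempotent=True):
        if oid not in self.objects:
            # Requirement-12 reading: a second delete is not an error.
            return "SUCCESS" if idempotent else "ITEM_NOT_FOUND"
        if self.objects[oid]:
            return "OBJECT_IN_USE"  # mappings still point at this object
        del self.objects[oid]
        return "SUCCESS"
```

The two return paths in delete() correspond to the two readings of the requirements being compared above.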
B: It is Anton's change, right: the Docker files, the new directory? Yeah.
C: Okay, now: what's used as the tag to build the image is also used as the tag to pull the image in the subsequent makefile, so they're tied together now. Another possibility, which I thought about but didn't want to overcomplicate this with, is to have separate env files for forcing a build and publish versus doing the whole CI.
D: Yeah, this tag: is that the date?
D: So we can actually, you know, use the MD5 checksum, right, or whatever mechanism you can use. For example, based on the Docker file, we can calculate the checksum of that Docker file and put that as a tag, right? This can be added by the build system, so you don't have to update this tag manually.
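A minimal sketch of the content-addressed tagging suggested here, assuming a hypothetical helper rather than the repo's actual build scripts: hash the Dockerfile text and use a prefix of the digest as the image tag.

```python
import hashlib

def dockerfile_tag(dockerfile_text: str, length: int = 12) -> str:
    """Derive a content-addressed image tag from Dockerfile text.

    Any edit to the Dockerfile changes the tag; an unchanged Dockerfile
    keeps the same tag, so a previously published image can be reused.
    """
    digest = hashlib.md5(dockerfile_text.encode("utf-8")).hexdigest()
    return digest[:length]
```

The build system would compute this automatically, so nobody hand-edits the tag when the Dockerfile changes.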
C: That's, like I said, there are probably a lot of ways to do this. If you want it totally aligned with the SONiC way, I'd appreciate it if you could get a resource to help on this and kind of take it over, because it becomes kind of a full half-time job.
C: You know, right now it's working. Maybe it's not perfect, but I don't want to make a career out of maintaining this personally. So if someone wants to get some help on this, that would be great.
So
if
someone
wants
to
get
some
help
on
this,
it
would
be
great.
C
That's
another
task,
too,
is
moving
to
Azure,
build
Runners,
which
we
got.
We
got
some
account
created
to
do
that
many
many
months
ago.
C: Okay, in the file we just looked at, to describe it simply, this env file: any given snapshot of the DASH repo specifies the Docker files not only as they're built and potentially published, but also as they're consumed. So it's an intact thing; everything's defined in this one file, so there's never any ambiguity, everything's synchronized. That's one thing I was striving for: it's simple.
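A hypothetical illustration of such an env file (all variable names invented): one file pins each image name and tag, and both the build/publish step and the CI pull step read the same variables, so they cannot drift apart.

```shell
# Invented sketch of the single env file described above: one definition of
# each image tag, shared by the build/publish workflow and the CI pull, so
# any snapshot of the repo is self-consistent.
DASH_BMV2_IMG=dash-bmv2
DASH_BMV2_TAG=221116              # hand-generated date code
DASH_SAITHRIFT_IMG=dash-saithrift-client
DASH_SAITHRIFT_TAG=221116
```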
C: Where is it imported? No, no, it's in the main makefile. Maybe I should share my screen, because...
C: It's not fair to have Christina trying to read my mind. Yeah, let me, I'll show you.
C: So, like I said, for any given snapshot of the DASH repo, you can tell which Docker files are used to build something, you can tell which tag it is, and you can see how it's being consumed and published; they're all synchronized, so there's no ambiguity. And, you know, there's a tag that's hand-generated: a date code is one way, a SHA-256 is another. I kind of like the date code approach, because then I know the order in which things were created, but it's arbitrary.
C: So it's just the way I started doing it, for convenience, and I think the main improvement that would be needed is on-demand building: if it cannot pull the image from a repo, it will build it in place rather than fail.
C: And to understand how all this developed, here's an example.
C: If you go back to last summer, some of these build workflows were taking over an hour, and so I spent a considerable amount of time optimizing: six or eight Docker base images, the minimum amount of rebuilding of only the deltas, and pulling whenever possible. The original change turned an hour-plus into about three minutes, and it took the Docker images from 12 gig down to about a gig apiece, something like that. So there's a significant amount of optimization that was done to improve the user experience.
D: So it seems we kind of agree on the approach, right? For example, we do the MD5 checksum of the Docker file and use that as a tag. So every time we try to build, we try to pull that tag from the ACR. If it is there, then we'll pull it, right? If it's not there, then we'll build it. So in this case, in the PR build, because it hasn't been uploaded, there's no such tag, so it will build it, right?
D: So once you build it, then we can run it, right? So we don't always have to pull; I mean, we will try to pull, but, you know, if it doesn't exist, then we'll fall back to the build. And in the normal case, I mean, if you didn't really change the Docker file, then the MD5 checksum will be the same, so you can pull it and you don't have to build, right? So then this will work for both PR and CI.
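The pull-else-build fallback just described can be sketched as a small decision function. Here the registry lookup is faked as a plain set (a real version would query ACR, e.g. by attempting a docker pull), and the function names are invented:

```python
def ensure_image(tag, published_tags, build):
    """Pull the image if its tag is already published; otherwise build it.

    `published_tags` stands in for a registry query (e.g. against ACR),
    and `build` is a callback that produces the image locally.
    """
    if tag in published_tags:
        return "pulled"   # Dockerfile unchanged: checksum tag already exists
    build(tag)            # PR changed the Dockerfile: tag is new, build it
    return "built"
```

With a content-derived tag, an unchanged Dockerfile always hits the "pulled" path, and a changed one transparently falls back to building.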
C: Yes, yeah. So I think it can be shortened to: we'll pull, and we'll build on demand when necessary. But I wanted to find out why you think using the checksum is better than using the date code; just curious about your opinion on that.
D: Because you basically want to detect a change, right? You cannot really use a date code. You know, where do you get the date? If you get the file generation date-time, that's not reliable.
C: That would be awesome, that would be awesome, and injecting some SONiC DNA into this would be a good thing, to get us closer to alignment.
A: Okay, great. So, guys, I have a 10 o'clock I absolutely have to make. Is there anything else we need to go over today? Anything burning?
A: I think we covered a lot of good ground today. I don't see any new people on the call, so I really don't have any introductions or anyone new to say hello to, so that's good. I can give 11 minutes back if you'd like to have it, and talk to you next week, if that's okay.
A: Does that work for you guys? Okay, all right. Well, thanks, everyone, for your time, and go get extra coffee or whatever, wherever you're at in your day. Thanks.