From YouTube: Intro to doc test framework
C: All right, so we have a new testing framework for istio.io which will allow us to automate the testing of the examples on the website. The framework lives in istio/tests/integration/istioio. Have a read of the tests; there is a README there that demonstrates how to actually go about writing tests. So this meeting is to discuss the overall framework and writing a test, as well as to answer any questions you have, ahead of hopefully writing tests for this tomorrow during the testing day.
C: Okay, so to get started, let me pull up an example. If I go to security/mtls_migration_test.go: this is a test for the mTLS migration, which is the task Security > Mutual TLS Migration. If you look at the mTLS migration page, you start with a variety of kubectl and curl commands, and you go through and execute a variety of steps on the command line. The testing framework echoes this. So if you look at the mTLS migration test, you can see that there are various inline commands here, following the order from the istio.io page.
C: To write a test, you have the main test Go file here, which will actually create an environment using the Istio test framework. Once that environment is created, you can go back to the mTLS migration, or whatever test you have, and actually go about writing a test. So we have a test function here with framework.NewTest; this is all set up right here. Then we go to Run, which takes a builder, and then there are various steps that you can add. So you call Add, and you call istioio.Command.
C: You can do inline commands, you can do YAML scripts, you can do a variety of different things, which we'll run through in a second. You can stack commands on top of each other and assemble them as you normally would when you run through the example, and at the end you call Build to actually run your test.
D: This whole framework is actually just a very thin wrapper around the existing test framework. You can see that everything in this framework is the way our integration tests look today; there is really nothing different. This builder is doing nothing more than building the test function that gets run by the test framework. That's literally all it's doing: the steps it's adding are just filling out a function that will eventually be run by the test framework in the Run command. So there's really not much to it. Fundamentally, when that function gets run, it just executes all these ten steps in order. It's pretty straightforward.
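Speaker D's point, that the builder does nothing more than stack steps into one function which the test framework eventually runs, can be sketched in a few lines of Go. This is a standalone illustration, not the real framework API; the names `Step`, `Builder`, `Add`, and `Build` are stand-ins.

```go
package main

import "fmt"

// Step is one unit of work in a doc test, e.g. running a single
// command from the page being tested.
type Step func() error

// Builder collects steps; it is a thin wrapper that assembles them
// into the one function the test framework will run.
type Builder struct {
	steps []Step
}

func NewBuilder() *Builder { return &Builder{} }

// Add stacks another step onto the builder and returns the builder,
// so calls can be chained as in the tests shown in the meeting.
func (b *Builder) Add(s Step) *Builder {
	b.steps = append(b.steps, s)
	return b
}

// Build returns a single function that executes every added step in
// order -- the function the test framework ultimately invokes.
func (b *Builder) Build() func() error {
	return func() error {
		for _, s := range b.steps {
			if err := s(); err != nil {
				return err
			}
		}
		return nil
	}
}

func main() {
	var trace []string
	run := NewBuilder().
		Add(func() error { trace = append(trace, "apply config"); return nil }).
		Add(func() error { trace = append(trace, "verify traffic"); return nil }).
		Build()
	if err := run(); err != nil {
		fmt.Println("test failed:", err)
	}
	fmt.Println(trace) // steps executed in the order they were added
}
```

The real builder presumably also handles setup and cleanup, but the core mechanism is just this: closures appended to a slice, run in order.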
D: Oh yeah, could we actually go to that output section? Because I think that's probably the most interesting bit for seeing what it is we're trying to do here. If you look at the output section, this is basically what a test generates. These snippets are actually in a form that's digestible from istio.io directly.
D
Actually,
if
you
go
to
that
sto
I/o
syntax
link,
real
quick,
we
can
just
ya
know
them
like
what
we're
actually
doing
with
this
out
yep
that
time
of
useful,
yes,
so
snippets.
So
this
is
this
is
actually
like.
If
something
were
manually
offering
a
page,
they
could
actually
like
make
a
text
file
somewhere.
D
That
actually
has
these
snippets
defined,
and
then
you
can
actually
link
to
these
snippets
from
misty,
Ohio
pages
and
and
you'll
actually
just
generate
all
that
content
for
you,
it'll
it'll
highlight,
like
you
know,
cute
cuddle
commands
it'll,
it'll
change,
links
to
github
github
links
to
actual
links
when
it
actually
generates
the
HTML,
so
so
we're
actually
just
taking
advantage
of
these
distant
acts
here.
So
the
since
we
generate
are
just
easily
digestible
and
and
our
10
row
some
scraping
logic
from
it.
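The snippet mechanism being described might look roughly like the following standalone Go sketch. The `$snippet`/`$endsnippet` markers and the function name are illustrative placeholders, not necessarily the exact istio.io syntax; the point is only that a test emits a plain text file of named blocks that pages can reference.

```go
package main

import (
	"fmt"
	"strings"
)

// extractSnippet pulls a named snippet out of a generated text file.
// The marker syntax here is an illustrative stand-in for whatever
// delimiters istio.io actually uses.
func extractSnippet(content, name string) (string, bool) {
	start := "$snippet " + name
	var body []string
	inSnippet := false
	for _, line := range strings.Split(content, "\n") {
		switch {
		case strings.HasPrefix(line, start):
			inSnippet = true
		case strings.HasPrefix(line, "$endsnippet"):
			if inSnippet {
				return strings.Join(body, "\n"), true
			}
		case inSnippet:
			body = append(body, line)
		}
	}
	return "", false
}

func main() {
	// A test run might emit a file like this; a page then links to
	// the snippet by name instead of hard-coding the command.
	generated := `$snippet enable_mtls
kubectl apply -f mtls.yaml
$endsnippet`
	if body, ok := extractSnippet(generated, "enable_mtls"); ok {
		fmt.Println(body)
	}
}
```

Because the generated file is plain text with explicit markers, the site build can consume it directly, which is the "no scraping logic" benefit mentioned above.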
C: Okay, so as Nathan just said, the output is the snippet here. Going back to writing actual tests: you've got a variety of options as far as including scripts. You can add them inline directly in your Go file, you can reference a path as you would on the istio.io website, and you can reference paths relative to the Go file itself.
B: So I have a couple of questions, and the first one is from the perspective of someone writing a new task, for example. What are we expecting folks writing a new task to do in order to create a test for this framework? What would that flow look like? Let's say I just created a new task: I have several commands, I have a couple of YAML files that I need to apply. What's that flow like?
C
So,
as
a
user
writing
the
documentation
you
come
in,
you
create
a
test,
let's
say
the
mpls
test
right
here.
You
don't
need
to
change
the
way
the
environments
deployed.
Then
you
don't
need
to
worry
about
the
test.
Amin.
You
just
come
in
and
write
your
test
here,
create
the
framework
new
test
run
and
then
you
add
all
of
your
individual
steps,
whether
those
are
gamal
scripts,
a
third
yeah,
yellow
files,
whether
those
are
scripts,
whatever
those
locks
acute
in
order
and
then
you
can
verify
each
of
the
steps
can.
E: Yeah, so the source of truth here is going to be the tests in the istio repo, and then once in a while we synchronize the output into the istio.io repo. There's nothing magic here: there's just going to be a text file checked in to the istio.io repo that has all the snippets. So if you're trying to avoid coordinating between the two repos, you can definitely just change the text file in istio.io and see what it looks like; in the end, the source of truth for the tests is still the istio repo.
G: I have a quick question: what about docs that require an additional step at the beginning? Let's say, for example, a doc has a first step, I don't remember exactly, but it would be configuration, like for an installation guide, for example, so you would have Minikube or whatever other installation you want. How do you do that?
C: Okay, so this kind of demonstrates it. When you call this framework NewSuite, the Setup will actually set up your environment for you, with whatever components you need. This Setup config is an optional parameter that allows you to say: hey, here's what I need.
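The optional setup config being described can be modeled with Go's functional-options pattern. This is a standalone sketch under that assumption; `NewSuite`, `WithComponents`, and `SuiteConfig` are illustrative names, not the real framework API.

```go
package main

import "fmt"

// SuiteConfig describes the environment a test suite needs.
type SuiteConfig struct {
	Name       string
	Components []string
}

// Option mutates the suite config; an optional setup parameter like
// the one described in the meeting can be passed this way.
type Option func(*SuiteConfig)

// WithComponents requests extra components: "hey, here's what I need."
func WithComponents(names ...string) Option {
	return func(c *SuiteConfig) {
		c.Components = append(c.Components, names...)
	}
}

// NewSuite builds an environment description, applying any options.
func NewSuite(name string, opts ...Option) SuiteConfig {
	cfg := SuiteConfig{Name: name}
	for _, opt := range opts {
		opt(&cfg)
	}
	return cfg
}

func main() {
	// Default environment: no extra components requested.
	fmt.Println(NewSuite("mtls_migration"))
	// Optional config: the suite declares what it needs up front.
	fmt.Println(NewSuite("mtls_migration", WithComponents("ingress", "telemetry")))
}
```

The design point is that a test that needs nothing special passes no options at all, so the common case stays a single call.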
G: Because if we are selling this process to the community, testing and then trying to replace the manual testing with automated tests, I'm just pointing to the question that is going to come first, which is: how do I make sure that all the tests, starting with the installation guide, are following the exact same path? And are we testing all the different possibilities, like including Minikube and other environments?
D: So I guess your question really wasn't so much about setting up the environment, but rather about actually testing both the Minikube and GKE options, which I'm not sure we necessarily have an answer for right now. We have what Brian is showing right now: this option to actually run different things depending on whether you're configuring for Minikube or not. I'm not sure if our jobs actually run with Minikube; maybe somebody else can speak to what we're doing there.
D: I could answer that. Theoretically, it's possible that we could generate documentation for both, but we would have to run in both configurations. Yeah.
E: It might be worthwhile to have something generic. On the website we have some fairly generic "do this thing first, and then here's the test" setup. We'd want to capture some canonical "here's the standard way to prepare for an Istio test" or whatever, so that all the setup is just one function call, and everybody's set up the same way.
B: I have another question, and this is regarding configurations. A lot of our tests have configurations applied in the kubectl command, so you will see a lot that there's a bunch of YAML after a kubectl command with all the configuration values. With the new framework, are we recommending now that folks put those configurations in separate YAML files, so it's easier for us to reuse them across multiple tests, or do we still want to allow people to have configurations applied inline after a kubectl apply command?
B: The question is: are we still allowing that, or do we want folks to move towards having kubectl apply reference a configuration YAML stored along with the test, so that we can reuse it across multiple tests? Say, for configurations that we'll reuse, or for commands that will use the same configuration. I'm thinking here of applying the configuration and then deleting it, for example: instead of having to have this huge YAML both times, you just reference the YAML file in the command.
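The reuse being proposed is small enough to sketch directly. In this standalone illustration the path and the helper names are hypothetical; the idea is just that one YAML file stored alongside the test backs both the apply step and the matching delete step, instead of the same YAML being pasted inline twice.

```go
package main

import "fmt"

// applyCmd and deleteCmd build the kubectl commands for a config
// file stored alongside the test, so the same YAML is reused for
// both setup and cleanup.
func applyCmd(path string) string  { return "kubectl apply -f " + path }
func deleteCmd(path string) string { return "kubectl delete -f " + path }

func main() {
	cfg := "configs/mtls-policy.yaml" // illustrative path
	fmt.Println(applyCmd(cfg))        // step near the top of the test
	fmt.Println(deleteCmd(cfg))       // cleanup step at the end
}
```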
D: I think, built in to any release process, whatever is on istio.io would be for the current release, or potentially slightly newer. I think exactly what we show on istio.io is still to be determined. There was a thought of maybe showing the output of a nightly build, for example. I think that's something we can probably discuss in the working group, exactly when we surface this to istio.io, but I don't think users care. Yeah.
B: I think that's a conversation worth having in the docs working group. There are things that we can do, like badges, for example: "this page is automatically tested", "this page was manually tested". I think there are certain things that we can do, but that again goes to the docs working group and the conversation to be had there about how we signal to users the results of the tests that we now have in the framework. That's a conversation we couldn't have until the framework was in place.