From YouTube: 20200128 SIG Arch Conformance
A: This allows us to separate the reviews of what the things should do from the tests that evaluate whether they do it, and the reviewers for those. The plan would be to have the creation of that, maybe by the next meeting here, and then start working on migrating or defining all the behaviors, such that our existing tests cover those, and eventually move to having a CI job that validates the coverage percentage using coverage of behaviors rather than endpoints. So, that's all.
A: Like I said, in the next couple of weeks we should get the end-to-end example done and merged, I hope, and then we'll work forward on increasing the coverage and keep both systems running in parallel, at least until we've converged on that. At that point we'll be prepared to talk to SIGs and start to solicit them to define their behaviors. It should be clear enough what they need to do. Any questions on that before we move to Hippie's agenda?
D: Yeah, I just actually pushed to that. I thought the other day it was gonna go green when I rebased, but it turned out there were some other changes in some of the text, and so when I rebased my documents didn't match. It took me too long to figure that out, but I just found it and updated the document, so I'm hoping it should be good to go now. We'll see.

A: Okay, good. I'll keep an eye on that.
C: I've got to dig into it. Yeah, I've certainly noticed that a lot of our scheduling tests are just very fragile to different configurations. I don't know how many people doing conformance have complained about it. Whenever I see someone say "oh, I'll just work around this by hacking the cluster," I'm like, well, the problem is within the test. So this one just triggered, and I've asked that team to focus on making sure that when they do fix scheduler tests, they do it for everyone. Okay.
G: I'm gonna try to share my screen. I've got a couple of things on the agenda. The first one is kind of demoing a bit of the initial part of our test-writing workflow, on how to get working with APISnoop's new process for writing. The second is we were going to bring up the APISnoop progress, but I'm having some trouble this morning with that and with the CNCF CI on the progress side.
G: We now have images getting created, but we'll be working on that a bit more this next week. On the test-writing progress, we've got different things that need to be promoted on the board, but we've got the five endpoints that were increased and we've got one set of test-writing PRs. I'm not sharing my screen yet, am I, to the conformance meeting?
G: This is the flow that we have been working on here: we just use kind and then deploy our APISnoop deployment. We've opted, for today, to do our cluster setup in a way that allows us to connect the various pieces together. So when we have source blocks like this, we'd love to create a second one over here. So here's our tmate attached; I always have to tweak the windows, and we'll create a kind cluster.
G: This picks up what's within our repo: it takes our components that are part of APISnoop, builds the images, and pushes them to the cluster based on its config. If we look at our repo, we have a special file in it, a Tiltfile, and it just references our deployment YAML. Based on the names of the images on the left, under the docker_build areas on lines 8 and 9, it will deploy our web app, which will be the APISnoop site, and that's how we compare
whether coverage increased for a particular test that we're writing. There's Sewer, which is an interface to Postgres that creates all of the tables and does the data manipulation; we're not building our own Postgres, we're just using the upstream image.
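The Tilt setup described above can be sketched as a minimal Tiltfile. Everything here is an illustrative assumption: the image names, build-context paths, and manifest file are hypothetical, not the actual APISnoop repo layout.

```python
# Hypothetical Tiltfile sketch; names and paths are illustrative.

# Load the deployment manifest; Tilt matches image names used in the
# YAML against the docker_build() calls below.
k8s_yaml('deployment.yaml')

# Rebuild these images and push them into the kind cluster whenever
# files under their build contexts change.
docker_build('apisnoop/webapp', './apps/webapp')
docker_build('apisnoop/audit-logger', './apps/audit-logger')

# Optional components can be toggled by commenting lines out, as in
# the demo (the audit logger and audit sink are commented out until
# a test is actually being written).
```

Saving the file is enough to trigger a redeploy, which is the auto-rebuild behavior described later in the session.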
G: So we commented it out so you can turn it off or on, and currently this deployment is missing the audit logger, because we don't want to use the audit logger unless we're actually writing a test; we don't care about receiving events via the audit sink otherwise. And the reason that we use Tilt is that, while we're working on anything within the apps, if we touch any of those files it will auto-rebuild the app and push it into the cluster.
G: So you can look at the build logs and the runtime logs for the various pods. We can see that our web app has been able to connect to the DB. Postgres is applying migrations, which takes an empty Postgres server, applies all of the APISnoop SQL, and creates all the tables. One of the things it does is pull down the latest successful job build and create the index that we need to see what our coverage is. So that's running right now.
G: This is, at the kind level, the containers that are running. If we go over to our mock template, this is how we would normally go through and verify that our different components are running. So there's Sewer, which we're waiting on to finish its deployment; what we're looking for, eventually, is just starting a SQL connection.
G: My goal here is to see if anybody else can get a chance to run this as well, so we can look at improving the flow of choosing what to write.
G: And then, once the ticket is approved, we have this verified, agreed-on set of tests to write, and those test definitions are turned into PRs. Because of the way that we write them in our literate style, we have all the documentation there; it's mostly turning those well-defined tickets into a PR, just by polishing it up and getting it through the normal PR process.
G: So I go to a local web page, because running Tilt forwards all the ports. When I go to the port there, what is it, 10350? It has all of our pods deployed, and one of those pods is our web app; it has a port forward for that, so it actually brings up a Snoop interface.
G: So tiny; I'll make it much, much bigger. We could go through here, for example, and select endpoint coverage, limit, where tested equals false and the level is stable. Let's try it: we want stable and untested. That's the way to query via the UI, and it says: here's a bunch of responses. Maybe we don't want all of those; how do we not select everything?
G: It just selects; we just do a bucket, whether it's hit by a test or not. We really just want to know the endpoint, right, the operation ID. So: what operation IDs aren't tested? That's it, right? So that list over here of operation IDs is the possible ones for stable endpoints that need to be tested, and if we go back...
G: And, of course, Postgres is running, so you could connect directly to Postgres, and that's usually what we end up doing: Postgres queries directly, not with the UI wrapping it. So I'll go back to that; the UI is just useful for some folks. I'm going to move the Tilt deployment out of the way, just to give us some more screen room, and fullscreen this side.
G: So you can see the API operation material; there's a view into that, and it's got details on every operation ID taken from the OpenAPI spec. Then the parameter material is for each parameter, in case we wanted to look at all the parameters of the APIs at that level; we've looked at exploring that in the past. There's the audit event material; you can see that the audit event material is 870 meg, wow. It's usually around 400.
G: This must have been a really heavy audit-event job. And then bucket-job-swagger: of all of the buckets we've only got one that we look at, upstream, and for each job, primarily we load one by default. It has the particular git version of the OpenAPI swagger JSON, and this is loaded into that table, so we can see we've got just over three meg there. The views don't take any data, because they're just views into the material. So our endpoint coverage material...
...is the view that we're working on to say what our endpoint coverage is, and it's just a materialized view of other queries into the system. Again, the comments kind of help you figure out what tables are there. The endpoints-hit-by-new-test view is for when we're writing a test, and we are doing the example of creating a triage ticket to say: this is the increase.
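A materialized view of the shape described above might look like the following sketch. Every table, view, and column name here is a hypothetical illustration, not APISnoop's actual schema.

```sql
-- Hypothetical sketch of an endpoint-coverage materialized view;
-- names are illustrative only.
CREATE MATERIALIZED VIEW endpoint_coverage AS
SELECT
    o.operation_id,
    o.level,                              -- alpha | beta | stable
    bool_or(h.hit_by_e2e)         AS tested,
    bool_or(h.hit_by_conformance) AS conformance_tested
FROM api_operation o
LEFT JOIN audit_event_hits h USING (operation_id)
GROUP BY o.operation_id, o.level;

-- Plain views cost no storage; a materialized view caches the result
-- and is updated with: REFRESH MATERIALIZED VIEW endpoint_coverage;
```

This matches the remark that the views themselves take no data: only the underlying material and any materialized results occupy space.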
G: The raw audit events themselves are flowing in; it's around four hundred meg for the raw audit events, but once we process the data into the audit event material it's around nine hundred meg. That's why it's taking a while, anyway. So those are the queries, and this is one I think we are still looking at, to make sure this is the number of endpoints that are there: distinct operations where it's not the live data, and I think this flow...
...we may need to work on a little bit, but this is "identify untested features using APISnoop". We could modify this, and we usually do when we're ready to write a test, so: comma, comma here. This is a list of the operation ID, the path, and the description, where it's stable and where the path doesn't include the word "volume". A real simple query, ordered by the operation ID so it's in alphabetical order, reversed so we start at the bottom, and limited to 25.
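The query just described might look roughly like this sketch; the view and column names are hypothetical illustrations, not APISnoop's actual schema.

```sql
-- Hypothetical "identify untested features" query, per the demo:
-- stable, untested endpoints whose path does not mention volumes,
-- reverse-alphabetical by operation ID, limited to 25 rows.
SELECT operation_id, path, description
  FROM endpoint_coverage
 WHERE level = 'stable'
   AND tested = false
   AND path NOT LIKE '%volume%'
 ORDER BY operation_id DESC
 LIMIT 25;
```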
G: This helps us choose a particular endpoint. If we go through our existing tests, any time we create a triage ticket for an operation ID, it includes why we chose that particular endpoint and the query we did to choose it. If you were to use this as a template for creating a ticket, you would go through and find the documentation for the particular endpoint or set of endpoints...
...you wanted to write a test for, and that documentation all goes in here, because, as you see with the little brown export markers, that's the stuff that gets exported into the ticket that we create to get approved by this working group. Then we kind of go into mock tests. So before we do the mock test, I want to spin up our in-cluster audit sink.
G: So I'm gonna go back over to our Tiltfile; I might have some things commented out, as you can see. I'm going to uncomment the audit logger, and then we'll also uncomment the audit sink. If I do this and save it (this is one thing that we've enjoyed about Tilt: it auto-deploys based on just saving this file), it will now go through, deploy the audit logger, build the audit logger from source, and tell the Kubernetes API server:
G: "please send all your audit events to this in-cluster audit sink." So we'll just wait a moment for that to go. I think it's up. So if we go through to look at this particular mock test: we decided to look at a ConfigMap with a static label. We just kind of put an outline in text, so that without reading the code we can decide beforehand whether this is a good test to write or not.
G: And then we have a simple example in Go here that reads the kubeconfig from a file and does a live test. Because of our literate programming, the way that we write it is that we don't write the whole thing; we just put the snippet that's most important, and then it runs and just says: hey, there are 16 pods in the cluster.
G: So if we go past here, we want to verify that it has increased our coverage, so the first thing to do is make sure that we find our user agent. I think we probably need to wait 30 seconds or so: the API server doesn't continuously stream every audit event the second it happens; it does it in batches of around 30 seconds. So if we wait about 30 seconds or so, we should be able to query this and find it; I think we set the user agent to "live-testing" inside the query.
G: All right, I'm gonna chalk that up to trying a live demo. This query here would show the endpoints hit: we say what pieces of the API were hit by this snippet of code that we're about to paste into a ticket, and it would show the operation ID, whether it was hit by an e2e test, and what was hit by the new test.
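The verification query described here might look roughly like the following sketch; the view, column, and user-agent names are hypothetical illustrations, not APISnoop's actual schema.

```sql
-- Hypothetical "what did my new test hit?" check: filter the audit
-- events down to the user agent set by the live-test snippet, and
-- flag endpoints that were not already covered by an e2e test.
SELECT operation_id,
       hit_by_e2e,
       hit_by_conformance,
       hit_by_new_test AND NOT hit_by_e2e AS increases_coverage
  FROM endpoints_hit_by_new_test
 WHERE useragent = 'live-testing';
```

The `increases_coverage` flag corresponds to the "is going to increase coverage" column mentioned next.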
G: You can tell if it was hit by e2e at all, whether it was conformance, and whether it was hit by the new e2e test, and there's this important one here, "is going to increase coverage", which is not working at the moment. And then our final notes, because we want the ticket to have this information, we just put in here to export: please assign to sig-testing and sig-architecture at the moment.
G: Yeah, that's kind of it for the flow that we're using to write the tests, and I'm sorry that the query didn't work out of the box; I'd intended to show something else today as well. Oh, that's right: I don't know if anybody wants to, for fun, either click on the URL that I shared with everyone in the Zoom group chat, or paste it into iTerm2 or its equivalent and do the whole SSH thing, whatever. But the idea is that now that we're done with the ticket...
G: We've done this work together, and everybody should be able to easily take it out of the cluster, or out of this little shared environment, and create a ticket. So we have a set of things we change to do that, and everybody that's connecting can drive the session as well. But the key trick to get this out here is comma, e, e to export it, and then we export it to GitHub-flavored markdown.
G: One second, I had something selected. So: comma, e, GitHub-flavored markdown, temporary buffer. This whole thing is the markdown format for what we use to create tickets, the export of this session. It says we deployed APISnoop, here's the query that we used to choose our endpoints, and here's the endpoint that we focused on. I'll make this full screen, but somebody else is connected to the screen, so it's showing at the smallest size. And then our Go code.
G: Yeah, this is the flow to make a ticket, and then once the ticket's approved we use a very similar approach to write the test, but it's more the standard way everybody does everything. You don't have to use this style; it doesn't matter, because the point is getting agreement and quick buy-off in a way that everybody understands.
B: I certainly feel like the ticket gives me sufficient info to review what's going on. I guess the other question is: when can we expect to have more tickets to triage? And I still kind of go back to feeling like this group could benefit from a report that shows us what the progress has been in endpoint coverage since the start of the year, or since the last release of Kubernetes.
G: Part of the rewrites, which are probably a week or two away, is showing that over-time graph. I was going to show that today in this demo, but I didn't agree with the numbers, so I wanted to go back and really understand them. To give you an idea, it said 45 percent conformance coverage, which would be great, but I want to really know that.
G: I think that's all from my team. I feel like we're doing pretty good at creating tickets. This last week we made sure the ones we had open but didn't need got closed; because we've lost Devon, I'm having to pick up the slack and refocus the ones we were serving with a bit more effort.
G: So it's taking a bit to ramp it up, but I think we're at a place with this flow where I'm happy that we can get the velocity and keep it. There are five endpoints that have just merged these last two weeks, and then I think we have another five to seven that need to merge, and we've got several in progress.
A: All right, thanks, Hippie. Is there...
A: Things that are privileged sort of got grandfathered in right now, I believe, but we probably need this program to be able to handle more functionality than the scope we've given it so far, and I think the mechanism for that is going to be those profiles. I think we probably need to resuscitate that effort and start to potentially divide out the things that are privileged; I mean, we have to decide how we want to think about it.
A: One of the ways to think about it is in terms of larger use cases, or particular ways in which you want to build out a cluster, like a multi-tenant cluster. I think OpenShift Online runs multi-tenant clusters; there are probably certain functions that are unavailable to the user, but we'd want to be able to say that, from a user perspective, it's still conforming in some way. But then for clusters where there is an administrator that has access, that administrative function also needs to be conforming.
A: So it's almost like conforming to those different profiles of user versus administrator. That's one way to cut up profiles; I'm not sure if it's the right way, but I think "privileged" kind of falls into that: there's going to be this set of use cases, or users, or workloads that require it, and those customers still want to be able to know that those workloads are portable across different vendors. So I think we have to address that somehow, and, you know...
C: ...in the conformance working group, if we can't figure out a way to subdivide these two very clear use cases, which are user workloads versus administrative extension of a Kubernetes cluster (CRDs, CSI, CRI), then we can't accomplish the goals of what people are asking for out of the conformance program, yeah.
A: I think that's the next thing we need to tackle. I think the behavior stuff kind of gives us an avenue for classifying behaviors into those two different profiles, or categories, with a little more granularity than the individual tests do, so I think that will be helpful in that regard as well.
A: All right, well, yeah. My expectation would be that the metadata associated with profiles is going to get attached to the behaviors, not to Ginkgo tags, and then it's still likely going to require us to... we might even have to break apart certain tests and things like that, but we'll have to see where we go with it.