From YouTube: 20200225 SIG Arch Conformance
A: So we did have... there was a [inaudible] and a PR out there for a long time that wouldn't merge because of all these flakes, and Jeffrey just took it and rebased it, with John's permission, and I approved it because it had already been approved, so I haven't checked. GitHub was having some issues earlier today, so I don't know if that's merged yet or not; I haven't looked, but probably not yet. But anyway, that; and then I think Jeffrey has some additional things. Jeff, do you have any comments on that?
A: We're making progress; it's coming along. The next step is probably for me to sit down and do another pass, trying to come up with enough behaviors to cover all the existing tests, and that's going to take me another couple of weeks. Along the same lines, I also want to put together a KEP, finally, for the profiles. A couple of years ago this was discussed, and I think there was some work on it; what I'll do is take that and adapt it to what we have now for profiles, plus some ideas that a few of us here at least have talked about, and we can then review it in two weeks, I think, at this meeting.
B: Glad to hear there will be an update on the KEP. We were hoping to have the coverage for the last year as a dynamically loaded web thing, and as we tried to load all the data we found some technical debt. The first thing on the agenda, after the behavior-driven plan, was the proxy and all the extra parameters that go after proxy. It's hazy in my mind; I didn't have a chance to look it up, but we made a call on what to do with proxy, and in most of the places in APISnoop where we deal with it, we muted it. However, as I'm revisiting it because 115 data sets wouldn't load, I realized that we should probably include those, and it will probably change coverage, to a positive extent. We won't get to see that for another few hours, though, because once we start loading the data sets in, building our materialized views takes time, and we found that late yesterday.
B: Before you were here, I said I may reach out to Aaron and check, but the proxy parameters are handled differently with regard to operations. All the focus was on operations at that point, and what happened was, when we found something with the proxy, the logic was: don't worry about the proxy, count it as a hit towards the original endpoint. Whether that's accurate or not is the question I had looking at it now; I feel it's inaccurate.
B: That will likely be later today, because of the technical debt around the proxy extra params. The short version is: unless there's disagreement, I'll go ahead and enable that for coverage and see how it affects our numbers. To make it make sense, we should calculate coverage the same way any time we change it, all the way back to our beginning numbers, which is why having the raw data around is useful.
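As a rough illustration of the point about keeping the raw data around (this is a hypothetical sketch, not APISnoop's actual implementation), the same coverage formula can be re-applied to every historical snapshot whenever the counting rule changes, so the numbers stay comparable:

```python
# Hypothetical sketch: recompute conformance coverage the same way for
# every snapshot of the raw data, so numbers stay comparable even after
# changing which endpoints (e.g. the proxy ones) are counted.

def coverage(hit_endpoints, all_endpoints):
    """Percentage of endpoints hit by e2e tests."""
    total = set(all_endpoints)
    if not total:
        return 0.0
    return 100.0 * len(set(hit_endpoints) & total) / len(total)

# Raw data kept per release: which endpoints exist and which were hit.
snapshots = {
    "v1.16": {"all": {"createPod", "listPods", "proxyGET"},
              "hit": {"createPod"}},
    "v1.17": {"all": {"createPod", "listPods", "proxyGET"},
              "hit": {"createPod", "listPods"}},
}

# One rule applied all the way back to the beginning numbers.
history = {rel: coverage(s["hit"], s["all"]) for rel, s in snapshots.items()}
```

Because the raw hit data is retained, enabling the proxy endpoints simply means editing the `all`/`hit` sets and re-running the whole history rather than patching the latest number.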
B: That's the APISnoop side of it. The next part is simply that we've had a number of test writers come and go, but the flow has gotten pretty good: I'm asking for help, and other people are helping other people move stuff along. I've had really good feedback, and I wanted a chance to kind of do that on the call.
B: So we can see how the APISnoop team is flowing to create those things, I'm going to try to share my desktop; when I hit share, let me know if you see my screen. This is going to be a demo of the workflow that we've adopted internally, so I'm going to pull up a browser tab as well and drop this into the chat.
B: This should go to everyone in the channel, and I'll bring it over to my screen now. I'm just doing this in a couple of terminals and a website. I could copy-paste this from the web into a file, although the files are already there to make it a little simpler and clearer. I have two files in a temp folder, and one of them is the test-writing environment: it goes through and pulls out APISnoop.
B: Eventually it will pull out Kubernetes too, when you start writing tests, but for this demo we'll simplify it. This allows anybody who wants to get involved writing tests, working with APIs, or really any Kubernetes project, to start from just a simple env file and make it run; as long as Docker works, it should be fine. The other part is our test-writing shell script, which reads the env file and then does a docker run.
B: You can see some things in there around running privileged and on the host, and a few other things that we can slowly get rid of and move away from, but for now this is a simple way for us to quickly get started. Usually we run this on a temporary box, but it also runs fine on your laptop or whatnot. Any questions so far before I continue?
B: The whole point is to make this so other people can watch it later. So if I quit back out, I'll just recap real quick: the env file is super simple and fits on a page, and so is the shell script; you could probably copy-paste all of that from the web. We're going to run just bash on our test-writing shell script, and it's going to use something we've decided to call kubemacs: macros for Kubernetes, and for working and collaborating together.
B: It uses kind underneath, and because we push code you want to set what your email and your name are; there are time zones all over the place. DOCKER_HOST is necessary if you're on Windows or a Mac, so that it can communicate through the Docker host there. It can also spin up a local registry, which helps speed up development of all the things, and then there's some stuff around the namespace and folders to check out.
B: You can see that it says kubemacs in its default repos folder, so I'm going to go ahead and just set that real quick.
B: And, you know, I set that to false; I guess if something doesn't work, it might be easier for me to leave it on, just in case. It doesn't have to be as robust as it is, but you can see underneath that it is mounting your kubeconfig folder, making sure that it can touch the kubeconfig file, and you could use a different one. It creates the cluster based on something inside, a kind cluster plus a registry, and that will take about a minute to come up.
B: I pasted this link into the chat, and you can drop it into your terminal. iTerm2 works great on OS X; Microsoft Terminal would work well soon, if they merge in support for copy and pasting; and on Linux, xterm is about the one that works best for now. So I'll give everybody a chance to come and take a look together; we're all pairing and sharing and doing this at the same time, together.
B: I suspect that first one is Caleb connecting, and anybody else who wants to connect as well is welcome. Someone else has joined, so we'll go ahead and continue. When you rejoin, don't do it from a phone just yet; it will work, but it will make the screen really small. So, back to our file: we ran one command, it got us into this test-writing file, and it populated our clipboard so that we could work together with each other.
B: If you go down to here, we can choose to use a different IP. This is dynamically going out, finding my IP, and updating our flow so that we have different URLs to go to. If you're not with ii and you want to use your own domain, that's fine too. I have a host that we're on right now, provided by Packet, and I just point everything, blah-blah-blah.hh.ii.coop, at this computer.
B: And actually run tilt down. There's one other thing that I wanted to do for this to work, and that was to set passwords on all of this, so we're not just exposing it to the world. We'll set a password of cncf and apisnoop, and once this comes back up, we'll take those steps.
B: The output of this, for us inside the APISnoop team, is these tickets, and the tickets are based on the endpoints that we work on in these files; they kind of drive things, helping us choose the tests to work on. If there's any part of this workflow you want to modify, anybody should feel free to do so.
B: As for the PRs themselves, let's actually look at this one; it's an APISnoop PR. The APISnoop PRs include the markdown and the org file, so that we have, in raw markdown coming from the org file, the endpoints that we're going to focus on, links to the documentation, the mock test, and the outline in Go. This is a pretty big test, so it may take us a bit to get through it; here's the output of the mock test.
B: It doesn't use anything specific to the testing framework; it's just kind of rough boilerplate, an example of what it will look like. This part is ensuring that our live tests hit it, that these endpoints are hit by the new test for sure, and the increase in number is how we calculate that. This raw file is taken and pasted into the new ticket, so this is our new ticket inside conformance, and these are the three to-triage tickets that have come from this work.
B: This is the docker build section of the Tiltfile, and it will use the folder specified at the end of line eight there to override the image. That's our Hasura image, Hasura with the database, and there was an update that we needed today, so I'll go ahead and run tilt up again with that uncommented, and we can see in our build log that it actually got built, and in our remote chat area as well.
B: It's that by default, and we'll just leave it as the default for now. We can actually see all the parts of the cluster coming up, including pgAdmin, Hasura, and the migrations occurring. When the migrations are up, this is where APISnoop brings in the latest data, from somewhere within the last four hours.
B: It's going to create a replication controller with a static label and go through this outline. We have another one for core v1 mocks, that's the second one, and it's focusing on operations like node status, and not delete or create, because we're focusing on the status portion: it lists the nodes, finds the latest created node, patches it, and makes sure that the status is not ready. And then our last one is these four endpoints around...
C: Yeah, so we initially start the test by creating a replication controller with static labels, so that when we list the replication controllers later in the test we can just use a label selector. As requested in a previous PR, the best way to list the items is by using a label selector, so it doesn't go through everything on the cluster, just in case that might be a lot. And then we add a new label.
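The static-label approach described here can be sketched in plain Python (standing in for the client-go list call with a label selector that the real test would use; names are illustrative):

```python
# Sketch: filter objects by label selector instead of scanning the whole
# cluster, mirroring the test's static label on its replication controller.

def matches(labels, selector):
    """True if every key/value pair in the selector appears in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def list_by_selector(objects, selector):
    """Return only the objects whose labels satisfy the selector."""
    return [o for o in objects if matches(o.get("labels", {}), selector)]

rcs = [
    {"name": "test-rc", "labels": {"test-rc-static": "true"}},
    {"name": "other-rc", "labels": {"app": "web"}},
]

# Listing with the static label picks out only the test's own object.
mine = list_by_selector(rcs, {"test-rc-static": "true"})
```

The design choice is the one C states: a selector keeps the list call from returning everything on a busy cluster, and it makes the test robust to other objects existing in the namespace.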
C: In terms of the scale, we watch for a change, because we set up a watch earlier in the test, once we create the replication controller. And then we have some fixed variables, so if we decide, maybe, to make this test with three replicas in this replication controller, we can change that in a more templated way. And then we have, like, a max replica count.
B: So now let's look at how we do that one: list the nodes, find the latest created node, patch it to say it's not ready, and then get the node status to ensure Ready is false. I'm not sure about that one.
A: I mean, okay, so we're talking about... right, this is likely what we'll be doing, and it's definitely privileged, and it's probably going to be okay, but if we do it on an existing node it's going to be potentially disruptive. I don't know, Clayton, if you know what happens if we start messing with the node status.
D: We would probably say that's a serial test, and we have to be careful about how many serial conformance tests we have. To some degree, though, I think it is important enough to say that a cluster should respond to some of these things. At some point we're going to have to start moving into serial disruptive tests, and we aren't going to be able to require someone to create a special machine that's isolated for other reasons, so it comes down to that.
D: There are all sorts of problems with creating our own node because, for instance, and this is a general problem today, kube today only supports a single region for most of the cloud providers. We do not support having the cloud provider on, for something like AWS, and then having Node objects that exist that don't match instances; the node controller will come along and try to delete them. So it's probably not worth going down that route for now, although arguably that is a scenario that should be supported.
D: Unfortunately, unless you can create a pod that will run on the node and guarantee that the node will shut down, across every operating system that can run kube, I think you're going to run into a lot of problems with this, so this test may just not be possible with the current set of constraints we have on a conformance test.
D: In general, one of the things that is a challenge today, and I don't think this is conformance's job initially, is that we are under-covered on the sort of core consistency testing that would catch issues like this, because of the diversity of platforms. That's causing problems for lots of people; for example, pod safety guarantees can be violated if someone added a controller to the platform that did something weird, and we don't have anything to test for that.
A: When you create a service with a selector, the endpoints controller is going to manage the Endpoints associated with that service, and so if we start messing with things out of band, you're going to end up in the same kind of situation, where you have a race condition between what we're changing and what the endpoints controller is changing. So we can test these...
A: It's possible to create a service without a selector; selector is one of the fields of the service, and if it's absent the endpoints controller simply ignores the service and you have to manually create the Endpoints. That's what we would have to do in this case. What I'm trying to say is that the other case, of how the endpoints controller works, would already be covered by our service tests: we already have a bunch of tests in conformance, I believe, that check that when you create a service, you get an actual data plane.
A: Along with services, you're not typically going to create just Endpoints on their own, so you would probably want to create a service that goes along with this, and then use the same data-plane checks for the service that we do for other services. Those are done for services that use the endpoints controller; these would be for services whose Endpoints are manually created, which I don't think we test right now.
A: Thank you, Josh, right; that basically tells the endpoints controller: this is out of your hands, don't do anything with the service. Essentially, you manually create the Endpoints, and we validate the service functionality, which should actually mean that those will still get [inaudible], even though I manually created the Endpoints, because...
B: Caleb, what I suggest is to also look at all the, what are they called, the user agents that hit the endpoint; we should be able to tie back whether it's getting hit by the endpoints controller. We should be able to bring the logs together to figure out which test is hitting it. Does that make sense? Yeah.
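The suggestion of tying endpoint hits back to their callers could be sketched over audit-log entries like this (the field names `objectRef`/`userAgent` follow the Kubernetes audit event shape; the grouping itself is an illustrative assumption, not APISnoop's actual query):

```python
# Sketch: group audit-log events per resource by userAgent, to tell
# hits from an e2e test apart from hits by the endpoints controller.
from collections import defaultdict

def hits_by_user_agent(audit_events):
    """Map resource -> userAgent -> hit count."""
    out = defaultdict(lambda: defaultdict(int))
    for ev in audit_events:
        resource = ev["objectRef"]["resource"]
        out[resource][ev.get("userAgent", "unknown")] += 1
    return out

events = [
    {"objectRef": {"resource": "endpoints"},
     "userAgent": "kube-controller-manager/v1.17"},
    {"objectRef": {"resource": "endpoints"}, "userAgent": "e2e.test/v1.17"},
    {"objectRef": {"resource": "endpoints"}, "userAgent": "e2e.test/v1.17"},
]

summary = hits_by_user_agent(events)
```

A hit attributed to `kube-controller-manager` is the endpoints controller doing its own reconciliation, while `e2e.test` hits are the ones a conformance test can take credit for.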
B: I'll try to spell slightly better; it's not working today. Alright, agreed, for the backlog. So let's say okay there, because we're running out of time; can someone take this new one? This is node status; that was endpoints, actually, so endpoints now goes underneath here. We did a promotion, and let's stop there. There's plenty of work in the backlog, we've sorted things correctly, and there are a few things on promotions.
B: Most of our work in the last two to four weeks has been on increasing the velocity of creating the mock tests, versus getting the promotions through, but I'm looking forward to that speeding up quite well. For the last little part of our demo: it's up, and we can go to apisnoop.hh.ii.coop/coverage, and this is what's loaded and how we run our tests. Alright, we won't continue any further with that.