From YouTube: 20200908 SIG Arch Conformance
A: Good morning, or good afternoon depending on where you are. Welcome to SIG Architecture's conformance subproject. I'm Hippie Hacker, your host today, and Riaan in South Africa will be taking notes for us. On our agenda today we've got a few items around our tooling, and our request from the CNCF to have, basically, a talk condensed down for KubeCon virtual — we need to commit to that this week. Stephen would love some feedback on proxy endpoints, and John wanted us to know that next meeting we'll have a discussion when Brad and Cerani can be here. Although — Brad, good to see you here.

A: Fair enough — we'll get through some of the fun bits quick for you. I'll go ahead and run into this. We have Kubemacs — kubemacs.org — that's heavily used within our flow, and that's on github.com/kubemacs/kubemacs. I'll do a little demo of it, and I might do that at the end.

A: So, whatever time we have — I'll start it now, so that the deployment can run while we're working together, and I'll just do that in a terminal. It's the way that we have been deploying. iTerm is a little nicer than the terminal that comes with OS X.
A: We have our Kubemacs repo that has our infra, and Packet has been super generous in donating resources for the CNCF to work on projects like APISnoop. We have a cluster config script, which I'll share here in a minute.

A: Let's look at it — I think it's cluster-setup — so we'll look at it real quick. This is Emacs, inside of Kubernetes, inside of containers — it's how we do all of our things. In this case it's pretty straightforward: use clusterctl to connect to Packet and bring up a nice setup, so that as we're setting up the cluster we can configure audit logging — because we lost dynamic audit logging prior to 1.19.
A: They deprecated that alpha feature that was super useful for us. Let me make this a little bit bigger. Also — could someone drop a link in the chat to this file from ii? Once we go through and bring up the cluster, we deploy an API database called SnoopDB as part of our deployment, and then also deploy Emacs within that, which goes through our org files to help us identify tests.

A: So this is the script we're about to run — that's cluster-setup — and it needs a kind cluster, so I'm going to do `kind delete clusters` to make sure we're starting from nowhere. This should work on any cluster in the world that runs Kubernetes, as far as the initial bootstrap goes, and then we're going to be creating a cluster on Packet from scratch that will allow us to do all of our testing.
A: What this will allow us to do: once you have a cluster and you run this script, by the time it's finished you will be directly inside the test-writing environment, with all of the latest CI runs and the audit logs there. So we can query for untested endpoints, as well as set up an audit-logger sink that goes to a database you can query, to see whether the application you're testing — or the test you're writing — actually covers those endpoints.
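A query in that spirit might look like the sketch below — the view and column names are illustrative assumptions, not necessarily SnoopDB's actual schema:

```sql
-- Hypothetical sketch: list conformance-eligible endpoints that no test
-- has hit yet. View/column names are illustrative, not the real schema.
SELECT endpoint, category
  FROM untested_stable_endpoints
 ORDER BY category, endpoint;
```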
A: Let me drop that away. Inside Kubemacs we have our infra folder, and our cluster-api folder, and we have our — you need a secrets env file that you update with your secrets: your Packet project ID and API key. If you're interested in helping us with this stuff, I'm sure we can get you a key. The other variables are some stuff for the size of the control-plane nodes and worker nodes — small, because we're just writing tests.
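Such a secrets env might look roughly like this; the variable names are guesses for illustration, not the script's actual contract:

```shell
# Hypothetical secrets.env -- variable names are illustrative.
export PACKET_PROJECT_ID="00000000-0000-0000-0000-000000000000"
export PACKET_API_KEY="<your-packet-api-key>"
# Node sizes kept small, since the cluster only needs to run tests:
export CONTROL_PLANE_NODE_TYPE="c1.small.x86"
export WORKER_NODE_TYPE="c1.small.x86"
```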
A
Are
you
sure
this
is
the
context
you
want
to
do
during
initialization
we
read
through
and
ensure
that
we
wait
for
the
cappy
web
hook
system
and
packet
system
to
be
deployed
sleep
for
just
a
moment
to
make
sure
that
we
can
create
a
using
cluster
cuddle
to
create
a
packet
from
our
template,
which
I'll
bring
up
in
a
moment,
and
then
we
go
through
and
and
check
for
the
machine
that's
available
on
packet
to
come
up.
A
So
if
we
go
check
our
deployment,
it
is
currently
waiting
for
the
packet
cluster
api
provider,
the
control
plane,
controller
manager
to
come
up
before
it.
Does
this
these
next
steps,
while
we're
we're
waiting
on
that,
I
will
launch
a
new
window
and
we'll
go
up
a
folder
so
that
you
can
see
our
cluster
packet
template,
and
this
is
the
cluster
api
definition
for
our
api
server.
A
Note
that
we
had
to
bring
in
some
extra
files,
both
in
the
arguments
provided
to
the
api
server,
so
that
we
could
have
an
audit
policy
and
a
definition
for
an
audit
sync.
But
I
did
not
see
an
easy
way
for
cluster
api
to
pass
files
along,
so
we
did
some
here
docking
in
our
post,
kubat
men
commands
and
our
pre-admin
commands.
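A minimal sketch of that heredoc approach, assuming a kubeadm-based Cluster API template (the exact spec layout, policy rules, and paths here are illustrative):

```yaml
# Illustrative KubeadmControlPlane fragment: since Cluster API offered no
# easy way to pass files along, the audit files are written via a heredoc
# in preKubeadmCommands before kubeadm runs.
kubeadmConfigSpec:
  preKubeadmCommands:
    - mkdir -p /etc/kubernetes/pki/audit
    - |
      cat <<EOF > /etc/kubernetes/pki/audit/audit-policy.yaml
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
        - level: RequestResponse
      EOF
```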
A: So for the pre commands, we do some stuff around networking to make sure that our control-plane endpoint comes up — it's not part of the machine itself; it's a dynamic IP that's moved around in Packet. Then we go through a pretty much normal Kubernetes deployment, including Weave, and set up nginx ingress, and ensure that a few things happen — down to getting MetalLB running at the very end.

A: Then we go through and deploy Kubemacs itself — this repo that we're starting from — using a Helm chart to deploy our APISnoop-focused version of Kubemacs, including setting your time zone to your part of the world and your Git settings, so you can commit and push from within this truly cloud-native development environment.

A: We make sure we just call the user "ii", because ii is usually people pairing — there are two people, so each of you is an "i" — and you share, you know, SSH, or in this case, when we're done, we normally hand out kubeconfigs. Because if you send somebody in the Kubernetes community a kubeconfig to a dedicated box, and they run these commands, they'll understand a lot about that cluster as you're pairing together. That'll make a lot more sense here in a minute, as I share that with everybody on the call.
A: In order for us to get the policy and the audit sink down, we make a directory inside pki, because that's already propagated to the node — the Packet node — and that makes sure that, all the way through to the API server container, our audit policy and audit sink are in place. The actual last steps there make sure that Docker gets installed before we end up running, because kubeadm is used to deploy.

A: So there are a couple of things for the node type; not much of interest in the rest of this file. Let's see how our deploy is going — it is still waiting for condition Ready on this. Interesting. Any questions so far? Can you make sure we've deleted all of the other...
E: Quick question — the steps that you just went through, are they documented on the Kubemacs website?

A: Let me pull up the website for Kubemacs — so, the folder that we're in — I'll start at the top. If you go to kubemacs.org, currently that website redirects to the kubemacs org on GitHub, and there's a kubemacs repository at the top level. If you don't want to do all the stuff we're doing in-cluster — just Kubemacs itself, not the way ii uses it in-cluster — you can just git clone it in your home dir, or anywhere on your system, and just export the Emacs load path.

A: Here, with this locally — I'll drop that into the chat. Oh, that was not the right URL; let me try that again.
E
I
guess
my
follow-up
question
is:
if,
if
I
wanted
to
run
this
locally
or
to
run
the
the
the
conformance
stuff
locally,
do
I
have
to
go
through
all
the
steps
you
just
enumerated.
A
A
A
Yes,
it
is
what
we
I'm
trying
to
figure
out
where
we
have
that
information.
It's
probably
in
the
an
api
snoop.
A: Part of this is the deployment of a setup with kind, and we're trying to figure out where our kind docs are — maybe under "kind plus audit sink". So this is from 20 days ago, so maybe a little bit stale, and it's a little bit verbose — kind of an exploration — and it focuses on exploring Docker Desktop things at the top. This is how you would build kind, to make sure your kind was built from source.

A: Then you would need your kind config itself. So I'd say this is probably a good starting point; I'll drop this into the chat — wherever it went — there you go.
A: That is the documentation for what you would need in your audit policy, and what you would need in your audit sink. Note that this endpoint — this audit-sink.yaml — is just a kubeconfig, but it needs to have a webhook, and so it sends events to this endpoint. I'm doing a bit of crazy zooming there. In the kind config, the worker nodes don't need it deployed. So it's kind of sitting here — I'm somehow touching...
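As described, the audit sink file is just a kubeconfig whose server points at the webhook endpoint. A rough sketch — the service address and names are placeholders:

```yaml
# audit-sink.yaml sketch: a plain kubeconfig consumed by the API server's
# audit webhook backend; the server address is illustrative.
apiVersion: v1
kind: Config
clusters:
  - name: auditsink
    cluster:
      server: http://audit-logger.default.svc:9900/events
contexts:
  - name: auditsink
    context:
      cluster: auditsink
current-context: auditsink
```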
A: Give me a moment to figure that one out. I think in this same folder, if we look, those files are probably there — yeah, they're not; we call it org tangling. So you would just take those two files and put them out — and we can probably put this out for you. So you need a kind config for it to deploy, and the kind config has to do a few things for us: we use the Docker socket to communicate, and we also want /tmp passed in. Our worker nodes have some extra things passed in, and the control-plane nodes add extra mounts for pki/audit, so that we can get the audit files within.

A: That is based on the host path having an audit folder with those two files in there.
A: These are the extra port mappings, so that we can reach in to the web services — in this case 5432, which is Postgres — and we do some patching to make sure that our ingress has come up, because we used to have a web interface that we would run. And then, if you have a public IP, we use ii's sharing.io, and usually people put their username in front of it.

A: So you have either a star certificate or an SSL ingress that sets up a cert for the API server as well, so that kubectl will just work — you can give the kubeconfig to a friend, for example, and it would just reach into that cluster. The main thing is making sure that within the API server, when it starts up, it has the extra arguments to reach — within the API server container — the files that have been brought in, defining the sink and the policy.
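Those extra arguments can be sketched as a kubeadm patch in the kind config. The flag names are the real upstream kube-apiserver flags; the paths follow the pki/audit mount described here:

```yaml
# Sketch: point the API server at the audit policy and the audit sink
# kubeconfig, using paths as seen from inside the apiserver container.
kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        audit-policy-file: /etc/kubernetes/pki/audit/audit-policy.yaml
        audit-webhook-config-file: /etc/kubernetes/pki/audit/audit-sink.yaml
```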
A: So when the API server comes up — when the cluster is up and APISnoop is up — all of the audit logs will go all the way into that database. To create it, you just bring up your kind cluster: `kind create cluster --config kind-config.yaml`. In this case we were trying to do it from source, but you could probably do it with the latest version just by running that command.
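The create step itself is a one-liner, assuming a kind config file like the one described (cluster name and filename are illustrative):

```shell
# Sketch: delete any previous clusters, then bring one up with the
# audit-wired config; names are placeholders.
kind delete clusters --all
kind create cluster --name apisnoop --config kind-config.yaml
```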
A: Sometimes it takes 15 or 20 minutes — the cloud-init goes through — and that comes from clusterctl, to not only deploy Kubernetes but deploy all of the other pieces that we need: not just the APISnoop audit sink and configuring the API server to send to that, but additionally Kubemacs running within the cluster, so that we have already checked out the APISnoop test-writing repository. We'll probably take that out of the apisnoop repo within CNCF and put it as a test-writing repo underneath the apisnoop org — or maybe we put it in a SIG conformance testing repo, because that's the actual content. Our org files, when we write them for tests, export to markdown, and we copy and paste that markdown to create a ticket.

A: So the flow is consistent in how we create these fairly complex agreements on what work is left to do, how we accomplish that work, and how we get it reviewed and approved by the community before we write our PR — because we had some trouble early on, where we'd go through and write the PR and maybe get it all the way through.

A: We can probably take a look at that by just looking at the GitHub kubernetes org, project nine — anything where you see "write some tests with some endpoints". I'll just click on this stuff that's in the sorted backlog. You'll recognize this very shortly — where it says "where is the original issue?"
A: These are the six steps that we think are important for ensuring a conformant cluster, and then we have a small — sometimes small — section of Golang code. This code does not use Ginkgo; it is at the simplest level, to show the logic of what the test would do, and it's not synced to a file — it's just saved to the org file in the ticket. When we run that code within our org file, it creates the standard output.

A: We normally just try to match that to what our test-writing outline was up top — and that ran within the cluster while the audit logger was running. So then we'd `SELECT DISTINCT useragent` to see whether our live test-writing Golang code actually hit the API server and showed up in the logs.
A: That's just a quick test. Then we ask: of the endpoints hit by the new test, were they originally hit by some other e2e test? We can see that this test hit this one, but it also hit these other ones that were not previously hit — so it looks like this might get about six endpoints. And if we use this other SQL function — selecting from projected change in coverage — we can note that previously old coverage was 181 endpoints, now it's 187, and we can identify those endpoints by that list up there.
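The two checks described might be sketched as queries like these — the exact function and table names in SnoopDB may differ:

```sql
-- Did our live test-writing code actually reach the API server?
SELECT DISTINCT useragent
  FROM audit_event
 WHERE useragent LIKE 'live%';

-- What would merging this test do to overall coverage?
SELECT * FROM projected_change_in_coverage;
```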
A: Were you posting in the chat? I missed it, sorry — having bad internet today. Okay, we'll reach out to Clayton later. So basically it's you and us today, so feel free to be super verbose in your thoughts and questions.

E: Not a problem. Actually, I do have a question: the ticket you were just showing — was that generated by the tool, by the Kubemacs tool?

A: Yep. And then we can actually go back and find the original. So this is the apps/v1 DaemonSet, and this is a ticket, right? But if I go into github.com/cncf/apisnoop — the folder that our project will bring up here shortly is the tickets folder, and the tickets folder has tickets for k8s — there will be a — what was the name — daemon set; I'll look for "daemon" in here.

A: Maybe it's missing — I'll just pull up one. Okay, that's really recent... when I sort: two months ago, once again last month — no, that's fine; let's look for one from two months ago, since we've been focusing on a specific set here recently. So this is the NodeProxyOptions test from two months ago. Here's the org file for the NodeProxyOptions test, and it renders in GitHub okay — but not necessarily perfectly; like, our test outline is missing some things.
A: And when we're inside our cluster editing this org file, we hit a key sequence that generates the markdown file. We don't have the automation in place to automatically populate the ticket, so our robot is Riaan — Riaan the ticketmaster — so that we have a human who can update the text in the top block of the GitHub ticket. Thank you — so that anyone can work on any of our tickets and still have the text at the top updated, because Riaan's the one responsible for updating the top of the ticket.

A: It's functional. So we have some Lisp code that we wrote — I think that was Zach; he's on the call — that takes that code block we wrote and turns it into a test. Is this it? Here — yeah. This takes the mock test and turns it into a Ginkgo test.

A: And depending on the complexity of the code there, and some specifics in Ginkgo, it will create the text that you can drop into a file to create the PR. It's not fully automated, because we can't guess — I mean, I guess we could add a few more people — right now it just generates the text: the Golang code that is Ginkgo-based, based on the non-Ginkgo code.
A: ...with PRs soaking. So we'll click on one of those — maybe a little more interesting. This particular one is at step five of seven; it updates that number as you progress through.

A: So we have the approval issue and the APISnoop PR. This is where we had the initial PR within APISnoop, which was just an org file — so the markdown and the org file: this is the markdown, and this is the org file — and that feeds into the k/k issue and the k/k PR. We take that code down here, inside of the code block, and we run that function on it to create a PR, and the PR is right here. So this is the PR.

E: Good — so that code is legitimate, although it wasn't human-generated, right?

A: Not at all. Usually what happens is it's automatically generated from Zach's code and then you do your edits, but for the most part it runs. Stephen, you could probably best speak to it — and Caleb is around the corner; he's in your shot. It's not perfect, but...
A: Yeah. And what I'm waiting for here is for this to come up — we missed those last couple of steps there. So let me see which cluster — actually, I'll do this: you know, I split things; I'm just going to create a new one, just to make sure. So, in my Kubemacs cluster-api folder...

A: Over here, the first thing we're going to do — and I'll do this one step at a time, so we can kind of see it all — is retrieve the kubeconfig, with this command here; that's how the Packet Cluster API works. And just before that I'm going to go ahead and export — we've got the base64 to decode.
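The retrieval step can be sketched like this — Cluster API stores the workload cluster's kubeconfig base64-encoded in a secret; the cluster and file names here are placeholders:

```shell
# Sketch: fetch and decode the workload cluster's kubeconfig, then use it.
kubectl get secret packet-hh-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > ~/Downloads/packet-hh
export KUBECONFIG=~/Downloads/packet-hh
kubectl get pods --all-namespaces
```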
C: Right, okay — so let me find you in the Slack.

A: That is not the right file, so let's try that again. It may be that I just need to copy that out. So I'm going to copy my kubeconfig — has it changed? Yeah, it's packet-hh — and I'll put that in my Downloads folder.

A: I'm not a Mac user... there we go. Today you will have my box, my key, my cluster — and I'm going to do the same thing that I'm expecting of you.

A: And then, with the file that I just sent you — packet-hh-kubemacs — I can do `kubectl get pods` across all the namespaces.

A: This one's been up 21 minutes, so we may just use the other one real quick — it's at a later stage than this one.

A: The kubeconfig — and then cp it over to sharing.io.

A: And then I'm going to pop out real quick and scp that over here — the .kube/config — to...

C: Always have a backup in case your demo dies.

A: SSH — can I get your GitHub username?

E: V-I-V-I-E-N — I'll post it in the chat; trying to pull it up.

E: You have an extra "a" — it's just E-N: V-I-V-I-E-N. There you go; this will get you in.
A: Directly from the — all right, I'll drop the host to you, so you should ssh to ii at that address. And if I back out as well — I'm going to ssh to ii at that address, and now we're in. On the host level we won't have a lot of things going on, but our pods and stuff should work.

A: Back to here — we need to decide on the conformance talk and go ahead and commit to that this week; I will put that in the channel. Stephen, do you want to go over your redirect?

I: If you just open up the link.
I: Can you — yeah. So, the test: I was hoping to get some feedback on whether there was actually a way of using the REST client to be able to stop on line 299. I've got a comment there around — at this stage it's doing a 500, because it's doing an automatic redirect. If we scroll up just a fraction: it's using the REST client at this point, and in that part of the test...

I: I've got a little bit of an authentication issue where — on line 322, where I'm setting the authorization bearer — of course that's working inside the cluster, but outside it's not. So I was hoping for some feedback on the difference between inside and outside of a cluster for this stuff.
A: One of the things we're trying to do — there's a bunch of endpoints, the proxy endpoints specifically, where the conformance definition is that they provide a 301 redirect. In order for us to test this, we need to actually see the 301 redirect and verify that it goes to the right location.

A: If we use the Go client from Kubernetes, it consumes those and doesn't allow us to see the 301 redirects, so we're forced to use the raw HTTP client in order to feed in the REST APIs that we need, and then look at the response stream without following it — because we don't need to follow it; we just need to verify that it goes to the right place. When you're inside the cluster — I believe Stephen is pulling up the configuration for the client; he sets it a little earlier here.
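Outside of Go, the same check can be sketched with curl — observing the redirect status and its Location header without following it. The URL, node name, and token are placeholders, and -k skips TLS verification:

```shell
# Sketch: hit a proxy endpoint and dump only the response headers, so a
# 301 and its Location header are visible rather than auto-followed.
curl -sk -D - -o /dev/null \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://${APISERVER}/api/v1/nodes/${NODE}/proxy"
```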
I: The configuration is a little bit further down — all right, on line 312.

A: Yeah — because you don't have a client certificate in-cluster; you have that authorization token that sits within a file, in a special location on the disk, that the Golang libraries know how to load.

E: But it looks like on line 309 you're turning off the TLS check anyway.

A: We saw that, Stephen — I think we did that because we don't load the TLS certificate to verify the server. We should probably pull in that server verification somehow, but I think that'll be a little bit difficult, and since this is a test, and we know the endpoints are set correctly, that's less important. However, I don't think the API server is going to authenticate us unless we have an HTTPS client certificate, which is how most kubeconfigs are laid out. Yeah.
I: Okay — it was just based off the kubectl raw logs, which were actually showing the curl command using a -k option to just skip the verification; so that was the error I was working from.

A: My hope is that the client config has both your client certificate — your private key — and your public key for the SSL server verification, which you can work into your HTTP client request. But I'm not sure.

E: Okay. I think I told you that I will take you up on that offer — I just haven't had time — because I'm very curious about how conformance, the whole thing, the mechanics of it, work.
A: If we look at this prow job — we're trying to go ahead and have a release-blocking job, and this is just the output from that job, on the CNCF's prow instance, which actually runs the database. We'll just skip to the important part at the bottom: there are zero untested endpoints that got added. And on this other one — the failed one — you tried to add 101 new endpoints; that's not allowed, and that would be a CI signal for the release not to go forward until those endpoints had tests.

A: Mine's broke — I did something; I'll claim the fail on my part. If you run it — and we can do that real quick; it doesn't take very long. So if I copy that URL — I should have a working cluster: `kubectl get pods` — not this one, so I'm going to unset KUBECONFIG so we're not using the broken one. This is my local kind cluster, and if I do `kubectl apply -f` with those URLs...

A: There's our job — and I think `kubectl logs` — is that how that works? It's still in ContainerCreating.

A: We have to wait for it to get there. If I describe the job — it's probably pulling the image down now; it's created the pods, so it's still ContainerCreating.
A: Created — yeah, it's a pretty decent-size image. When this job comes up, it's the same job you saw there, and the upstream image is inside of our apisnoop GitHub org — github.com/apisnoop — and there's a snoopdb repo.

A: The same repo that that URL came from. And it is pretty cool — this also uses a lot of org files for how we create it and build it. You can connect back to the database itself without a cluster, because of the work that it does. This is the Postgres — all of our database functions are here; they get brought in and run from our org file, and in the end create a load_all_audit_events — what is that initial function?

J: The Python functions are outside of here; they're loaded into the container itself.

A: Yeah — if you want to understand how the underlying deep things work, I would start here: this file is the SQL function we load when the database gets created, or when the container comes up, and it does use some Python code outside. This is where we, from the snoop details, determine the bucket and job, and download and process audit logs.
A: We'll look back at those real quick. It just goes through and determines the most recent successful jobs in Testgrid, and it also downloads all of the stuff so that we can pull it back into the DB. I won't go super deep into that.

A: From this preview — which also says where 1.20 and 1.19 were, and what's not defined — we can probably bring that up as well. We should probably do this in another call, but this may be helpful for you. So if you want to see where things are going — where's that other file — here's the Python file that I'm describing.

A: Each of the buckets, when they get created, has a finished.json, and that lets us know some information about the job — so we know what version it was — and then we go through and inject the audit logs, after we've downloaded them, into the DB, into a really nice schema that allows us to query. And they're anywhere from 1.5 to 3 gigs. Zach, did you see any over 3.5 gigs?
A: Yeah, they're big. They used to take us — oh my gosh — we had some Golang code written to try to parse the JSON and shove the raw data into the DB, and Zach's got it down to — four, three — under five minutes. Yeah, it's around two minutes. Two minutes — excellent.

A: That underlying database is in that image that was built, but you can also see it driving apisnoop.cncf.io.

A: I think this is a static front end, as far as the code that's loaded, and then the JavaScript reaches out to grab a JSON file that we generate using SnoopDB, to say what the endpoints are and how they are hit for the various releases. When you switch releases, you can go back to 1.15 and see what that was like — and this is all of the endpoints.
A: We also have a set of endpoints that are ineligible — so if we click on the conformance one over here, there's this list of ineligible endpoints; there are 60, and we have very specific reasons why these are not part of the goal. And here's the remaining debt: if you want it without using the database, you can click here — this is the 196 endpoints that need tests.

A: Just a second — I have a working kubeconfig from there, and I'll download his to my Downloads folder, and you might upload it to... okay.

A: I think if I had renamed mine it might have worked. So I'm going to go into our shared area: `export KUBECONFIG=` the Downloads path — and then it was called packet-bb7 — so `kubectl get pods` here looks a lot healthier than mine, because everything came up.
A: APISnoop — the first thing you see there is the audit logger, and that is a simple bridge between the API server's audit sink and the Postgres database.

E: Oh — I'm sorry, I didn't realize. Do I have that one? Do I?

H: It's Dave — you get to get...

A: And once you have that, with your `kubectl exec -ti` — you've got the `--namespace bb7-kubemacs` and the `-- attach` — I'm going to run that and see if it works. It does not work.

A: Oh, you're right — I did miss the zero. So I'll make sure it works, and then I'll detach, and then I'll paste this. So once you've got that kubeconfig working, the way you pair is: you run this command.
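The pairing command can be sketched like this — the pod name, namespace, and tmate socket path are illustrative placeholders:

```shell
# Sketch: exec into the Kubemacs pod and attach to its shared tmate
# session so both people land in the same terminal.
kubectl exec -ti kubemacs-0 --namespace bb7-kubemacs -- \
  tmate -S /tmp/tmate.sock attach
```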
A: You have to export the KUBECONFIG as well — mine looks like that on the shared screen — and I'll also send that to you. Oh...

A: Just the two things — the kubeconfig and kubectl; that's all you need, plus a terminal, and I'd suggest iTerm2 if you have it. ("Yeah, that's what I have.") Beautiful. And I will go ahead and run the command I've given to you, and we will actually see you join at some point.

E: Yeah — let me set the... because I was trying to pass it as a flag and it's not liking it, so I'll just...

E: That's one of them — I have to think about how to do it simply... well, there's no export, so: `KUBECONFIG=...`
A: I don't know if it'll go well on the recording, but if you want to share your screen, I've enabled multiple people sharing their stream — oh, you should go to the terminal window; you don't have to share your whole screen. I think it's worth understanding what's going on between the two of us, so we can kind of see how the pair session works — that would be great, as we're recording for later. I'm also glad to go past meeting time, but it's just ii and you right now. So — okay.

E: Yeah, I won't be able to stay too long, because it's 6 p.m. on my side of town. ("That's fair.") But I was just going to try it — this is fine anyway.

A: So you just exported it, right? — "I just set KUBECONFIG." — And I'm sure setting the command-line parameter would work just fine; it doesn't matter, just as long as that kubectl exec command is using it.

A: That is a KUBECONFIG var to point into that file, and it looked right. Like, if you do `kubectl config view` — I'll do that outside of mine — it'll redact the things, but there's the current context: kubemacs-admin at bb7 — bb7x — because I've sent several files, and it's the last one that I... oh.
E: The first one, not the last one, right? Yes.

E: There you go — I was pressing tab and it was auto-completing the old one. Right — that same thing.

A: That's the end result of it. So what you're seeing is tmate running within a pod that has Kubemacs running on it, that has the APISnoop repo checked out; we're in the tickets folder, and we have a file open, in order to go through and write tests — or change the branch and check the thing out, or whatever else. And this is connecting to — I'll just go over here — now that you're in-cluster, `kubectl get pods` is still communicating via that socket; it's an admin token.
A: There's lots of pod stuff — what's something you're interested in? No? Pods is fine? Okay, pods. So we know we're going to work on pods, and then we go find the doc. We won't update it, but the URLs inside here are pointing to the documentation — the Kubernetes API and client-go — and with a mock test; I think the original outline says: create a pod with a static label, patch the pod with the new thing, get the pod, insert the thing you're interested in — it's just a general outline.

A: You could totally replace it with whatever you're doing. And then we have this Golang code — we're not going to write something new; let's hit comma-comma — and that actually spawns out and takes that code, sticks it into a temp folder, runs the Go compilation on it, and captures the output so that it's stuck in there at the bottom of the file, where we can verify it. And because we're running an audit logger, we can capture the API interactions of this.
E
A
I
don't
so
we
we,
we
have
the
the
audit
logger
there
and
the
audit
logger
can
be
deployed
independent
of
humax,
so
you
don't
have
to
use
our
a
max
or
umax
flow
or
anything.
Okay,
you
could
just
use
the
audit
logger
and
that
is
in
a
different
repo.
So,
but
I
mean
it's
actually
in
the
same
repo
for
api
snow.
If
you
back
up
and
look,
we
have,
I
believe,
it's
a
customized
right
and
we
have
a
customized
and
I
think
the
the
focus
for
that
is
actually
our
apps
yeah.
A
So
if
I
wanted
to
go,
look
at
the
backing
up
to
the
apps
folder
to
look
at
audit
logger
itself,
there
is
an
index
file,
that's
how
we
do
our
org
files
and
it's
all
the
documentation
and
the
source
code.
All
in
one
file
and
it
gets
tangled
out
to
the
various
pieces
of
code.
In
this
case
the
app
is
just
a
a
javascript
within
its
container
and
package
json,
and
this
one
turns
the
it
listens
on
the
right
port
and
the
audi.
A
The API server sends the logs here, and it turns around and injects them into the database, specifically the apisnoop bucket, for the live job. And if we go back over... so, without using our tools, you could just query the database to see what your applications are using.
E
Yeah, I think that's what I was asking: the 196-untested area. Let's say I wanted to create a test for one of those, for one of those endpoints, yeah. What's the starting point? Is it what we're looking at on the screen, or is it somewhere else, to start contributing?
A
So there's two parts. One is identifying an endpoint of importance to you, and having the audit logger, the audit sink, running to where you can identify them: oh, we're using that. And look at the user agent on that; that's a super important piece of software for us, and that allows you to prioritize that endpoint. Okay, and that's not necessarily part of the flow: you could just note, looking at your own logs, that an important thing isn't tested, because the list is there; it's 196.
E
Oh
okay,
because
one
thing
I
not
I,
but
we
we'd
like
to
start
getting
into
us
to
encourage
folks
to
to
write
conformance
tests.
So
I
think
part
of
the
thing
part
of
it
is
you
know
if
this
tool
can
help,
or
is
it
just
a
tool
that
lets
you
see?
What's
not
written
or
is
it
something
that
you
can
use
to
as
a
starting
point
to
start
the
process,
but
it
sounds
like
you're
saying
you
don't
even
need
this
tool.
A
Testing
right
and
then
what
what?
What?
What
what's
sad
is
that
we
used
to
have
the
dynamic
audit
sync.
So
if
anybody
had
dynamic
like
alpha
enabled,
basically,
then
they
could
just
coupe
cuddle,
apply
and
deploy
not
only
the
audit
sync
but
configure
the
api
server
via
the
dynamic
audit.
Sync
to
point
to
that
deployment
at
runtime,
because
we
lost
that
it
was
deprecated
as
an
alpha
feature
in
119,
you
have
to
modify
the
startup
parameters
to
api
server.
A
The
binary
inside
of
pi
server,
container
normally
deployed
by
cube
admin
to
have
the
audit
definition
and
the
ip
address
and
port
of
the
audit
sync.
Okay.
So
that
requirement
all
of
a
sudden
makes
it
a
little
bit
hard
for
most
vendors.
I
would
suspect
to
easily
bring
up
api
snoop
in
cluster
and
and
we
we've
thought
about
looking
at
other
dynamic
apis
like
a
an
admission
controller
or
something
to
where
we
can
get
those
api
calls.
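With kubeadm, pointing the API server at a static audit webhook looks roughly like the fragment below. The flags are the real kube-apiserver audit flags; the file paths, volume name, and the contents of the policy and webhook kubeconfig files are assumptions for illustration, and both files must already exist on the control-plane host.

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # which requests to record
    audit-policy-file: /etc/kubernetes/audit/policy.yaml
    # kubeconfig-format file naming the audit sink's address (the IP and port mentioned above)
    audit-webhook-config-file: /etc/kubernetes/audit/webhook.yaml
  extraVolumes:
    - name: audit
      hostPath: /etc/kubernetes/audit
      mountPath: /etc/kubernetes/audit
      pathType: DirectoryOrCreate
```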
A
But
in
order
for
it
to
work,
really,
we
need
to
have
the
user
agent,
the
http
user
agent,
to
identify
not
only
which
pieces
of
software,
but
we
updated
the
e
to
e
testing
ginko
framework
for
kubernetes
to
include
the
current
test
that
it's
on
the
full
long
string,
and
we
use
that
user
agent
full
long
string
in
the
audit
logs
to
verify
that
a
particular
endpoint
is
hit
by
a
particular
test
and
we
filter
on
conformance
to
go.
Was
this
endpoint
hit
by
a
conformance
test
or
not?
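Since the test name rides along in the user-agent string, deciding whether an endpoint was hit by a conformance test reduces to filtering the audit log on that string. A toy sketch over three invented events (the exact way the test name is joined onto the user agent is approximated here):

```shell
# Three hypothetical audit events: two from e2e.test runs, one from kubectl.
cat > /tmp/audit-sample.log <<'EOF'
{"verb":"create","requestURI":"/api/v1/namespaces/default/pods","userAgent":"e2e.test -- [sig-node] Pods should be submitted and removed [Conformance]"}
{"verb":"patch","requestURI":"/api/v1/namespaces/default/pods/foo","userAgent":"kubectl/v1.19.0"}
{"verb":"delete","requestURI":"/api/v1/namespaces/default/pods/foo","userAgent":"e2e.test -- [sig-node] Pods should be submitted and removed [Conformance]"}
EOF

# How many recorded calls came from a conformance test?
hits=$(grep -c 'Conformance' /tmp/audit-sample.log)
echo "conformance hits: $hits"
```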
A
Okay?
I
I
don't
in
order
for
it
to
be
useful
to
that
in
general.
We
need
to
basically
we
and
we
have
a
helm
chart
for
apis
now
that
helm
chart
to
deploy
api
snoop
in
this
way.
But
you
have
to
point
your
api
server
at
it
to
begin
with,
so
it
wouldn't
necessarily
need
to
run
in
the
cluster
that
you're
testing.
E
Yeah, that's it. I'm the lead for Sonobuoy, so I'm trying to figure out... here we go, this is supposed to work like Sonobuoy, right? Yeah, I'm trying to figure out how, where Sonobuoy could be useful.
A
My initial thought, before we lost the dynamic audit sink, was that we had something just like Sonobuoy, but for conformance, right? And when you ran the kubectl apply from the dynamic grid at the Sonobuoy site... I don't remember the URL, but you have this... what's the name of the site? I forget the...
A
Right, I haven't seen this... but you need to see this, so give me a second to find it. So, Sonobuoy, and I think it's in the readme; it's pretty, pretty...
A
There
was
a
prerequisite
building
latest
release,
getting
started
results.
The
results
page
detail
data.
There
was
a
website
that
ran
and
you
clicked
on.
I
want
to
do
it
and
you
clicked
on
new
and
it
gave
a
like
this
is
still
command
line
stuff.
It
wasn't
like
that.
Yeah.
A
Yeah,
that's
the
site
you
were
just
on
and
then
there
was.
It
is
not
here
anymore.
C
A
What
you
did
is
you
went
to
this
website
on
sonoboy
and
you
clicked
on
new,
and
it
gave
you
a
grid
and
it
said
for
to
get
started.
Go
cuddle,
apply,
dash,
f,
this
guide,
slash,
sonoboy.yaml
and
it
okay
deploy
onto
the
cluster
and
when
the
cluster
was
done,
it
uploaded
the
results
back
to
sunboy
and
they
had
a
web
page
that
showed
them
the
results
of
their
tests,
which
is
brilliant.
A
So if you want... like, that's the good stuff there. And if we can find a way to do it and bring back the ability to do a dynamic audit sink, you could deploy the audit sink inside their cluster and dynamically say: send all API calls here during the run of the Sonobuoy-plus-APISnoop, and get a profile. Like you say: leave this up and running, run all your important stuff, and then upload it, and we'll show you just this Venn diagram of the stuff that's not tested and the stuff you are using.
A
Getting to choose which user agents you want in the report, because you don't want everything; you just want the things that you wrote, right? Maybe we could limit it to a namespace and then say: these are the pieces of software using things. Because it's almost like, for Debian and a couple of others, they have the popularity contest: it looks at the installed packages and says these are the popular things, in this case.
E
Yeah, yeah. I mean, I'm definitely willing to, you know, continue this conversation to see why Sonobuoy could be useful upstream; nothing necessarily specific to VMware, but definitely something useful for the rest of the community.
A
Is there... the people and the source for that dynamic kubectl apply, for that Sonobuoy-style thing? Because it was a YAML deployment of Sonobuoy, and we could use just that framework, even if it's not Sonobuoy, or use it as a base to change the way that Sonobuoy does its reporting, for a CNCF tool for conformance, so that we could possibly move away from submissions on the repo, or speed things up. So you just run the tool and fill in the...
E
Once you have that YAML, you can apply it, you know, wherever your cluster is; so that still is the case. But I'm very interested to continue this conversation, so I can understand the use cases you're mentioning.
A
I'm glad, because I think that's going to be very useful, and I think my ask within VMware is to go find the website that I don't see anymore, the one that didn't run Sonobuoy for you: it generated the YAML, and under that grid it had a status page.
E
I will ask around; I have never heard of that.
A
That is some good stuff, and I was, I was...
A
I actually tried to base our work on that approach, so that we could easily onboard other people, to show them. Because at the very first of this project, way back, like three or four years ago, we were trying to figure out how to have... right now we're worried about conformance for clusters, but what about conformance for applications?
A
Because
then
you
could
go
through
and
run
this
and
check
your
user
agents
and
make
sure
your
user
agents
don't
hit
anything
outside
of
of
of
ga
right
right
right.
The
gaz
application
guaranteed
to
run
on
vmware
and
and
apple
and
google
and
all
the
other
cloud
events,
but
it
only
works
well
if
it's
super
easy
to
deploy
just
like
sono
boy
in
the
early
days,
didn't
need
really
anything
other
than
coop
cuddle.
I'd
go
to
the
web.
A
Click on "generate me a YAML URL", and then that deployment included uploading it back to that website.
E
I will ask around; I've never seen it, I've never heard of it, but then again, I'm only a month into this, yeah, as far as taking over Sonobuoy. But all I can say is, right now there are a couple of things Sonobuoy is doing. One is: you can still generate the YAML, but it's all CLI. And the other thing that was added earlier this year is the ability for Sonobuoy to... to create plug-ins for Sonobuoy.
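The CLI flow being described, generate the YAML yourself and then apply it, looks roughly like this today. It is a sketch needing the sonobuoy binary and a live cluster, so it is wrapped in a function rather than executed here:

```shell
# Hypothetical end-to-end run of the current Sonobuoy CLI flow.
sonobuoy_conformance_run() {
  sonobuoy gen --mode=certified-conformance > sonobuoy.yaml  # generate the manifest yourself
  kubectl apply -f sonobuoy.yaml                             # deploy it onto your own cluster
  sonobuoy status                                            # poll until the run completes
  tarball=$(sonobuoy retrieve)                               # pull the results tarball down
  sonobuoy results "$tarball"                                # summarize pass/fail locally
}
```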
E
So
if
son
of
boy
as
if
sunboard
core
doesn't
do
something,
it
doesn't
meet
some
kind
of
requirements,
you
could
create
your
own
plug-in
for
it
to
do,
and
basically
the
plug-in
is
nothing
more
than
a
than
a
pod
that
gets
deployed
onto
kubernetes,
so
yeah.
I
think,
there's
definitely
some
some
synergy
to
get
something
done
here.
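A plug-in, as described, is essentially a pod definition plus a little metadata. This sketch follows the documented plug-in YAML shape, but the plug-in name, image, and command are invented, and the results handling is simplified:

```yaml
sonobuoy-config:
  driver: Job               # run once as a single pod
  plugin-name: audit-snoop  # hypothetical plug-in name
  result-format: raw
spec:
  name: plugin
  image: example.org/audit-snoop:latest   # hypothetical image
  # write output, then signal completion by naming the results file in "done"
  command: ["/bin/sh", "-c",
            "run-checks > /tmp/results/out && echo -n /tmp/results/out > /tmp/results/done"]
  volumeMounts:
    - mountPath: /tmp/results
      name: results
```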
A
If I can't find it... because it was definitely what piqued my interest around Sonobuoy: getting really broad adoption in its approach, and getting people to share the results of that together. Because it's one thing to run it and get the results locally, but then we've got this go-upload-to-GitHub process, and you can't really have this "well, what do you do with your cluster?"
E
...away from the machine. All right, guys; but that was, that was definitely... I'm excited to continue this conversation, to figure out what we could do, and what I can do to make Sonobuoy helpful to the process. Definitely, yeah.