From YouTube: Kubernetes SIG Testing 2019-01-29
A: All right, hello everybody — this is Erick Fejta. I am in for Aaron Crickenberger, and I have not shaved today in honor of — well, let's see, yeah. So, what is today? Today is the 29th of January. This is a SIG Testing meeting; this will go up on YouTube, and there's a code of conduct, so everybody please be nice to each other. Let's see, so: the first topic on our discussion is Gwen, who wants to talk about SIG Testing and GitHub having a happy powwow together. So — Gwen?
B: Yeah, yeah — so I was trying to figure out internally who would be the best people for testing to talk to inside GitHub, and last week I finally got some traction. So I brought Nick to the meeting today, who can introduce himself and kind of give everyone a little bit more background. I'm envisioning this to be a little bit of a first contact, because I just really hope that people can get together and chat. I also see Ruth is on the call, also from GitHub — so hi, good to see you — and yeah, I'll just go ahead and ask Nick to take it. Awesome.
C: Hi everyone. As introduced, I am Nick. I am the engineering manager for what we call the ecosystem API team here at GitHub, so that's the team primarily responsible for our REST and GraphQL APIs. I also happen to be the manager of the team that is responsible for our webhooks. So in a way, I and my teams are kind of a one-stop shop for feedback on a lot of the public-facing properties of GitHub.
C: So I was invited here, and I brought in Ruth from the ecosystem API team as well — who I'll ask to introduce herself in a sec — but just to, you know, get some first contact, get some feedback: what the big pain points in the API are, where it's good, where it's bad, what we can improve. Just to get some bidirectional communication going between the two of us, so that we can figure out how to make the API better and stuff like that.
A: Great — well, thank you. It is awesome to see some faces, too. Obviously we use lots of webhooks and do all sorts of crazy stuff with our automation. And so, how are you wanting us to interact? You know, did you want this to be — do you want us to be getting into the details now, or are you going to be on our Slack, or do you want us to file GitHub issues? Like, how do you want us to use this meeting here?
C: So that's kind of the familiar thing for us to plan from, but, you know, before that I'd just love to drill into where it's good, where — where can we get kind of the best ROI, let's say, if we were to focus some of our time on improvements, or focus some of our time on kind of diving in. That's what's most important for me, I guess: where can we maximize the return by starting to jump in on some of those spots, I think.
A: Yeah, I mean, in broad strokes I would say in general it works pretty well. Maybe we've been running into a few issues with, you know, occasionally webhooks not getting delivered — obviously probably more than 99% of them are delivered, but every now and again we do not get one. You know, we'll get a comment like: oh, why did my comment not result in whatever the thing I wanted to have happen, and I have to send it again, blah blah. And then there are also the occasional search things — I think Cole's been running into some search issues, and Steve as well in his deployment — and so it'd be cool if we could sort of figure out ways to prioritize that going forward. It'd be super awesome to discuss that in maybe more detail sometime later.
C: Something that we are hoping to tackle in the next few months — I can't make any promises on delivery time, but this is an oft-requested thing, from large enterprises down to individual people. You know, we've got that pretty poorly functioning deliveries page, as you can see, on the repo or on the GitHub App or whatever, where you have to click through to get any details and stuff like that.
C: Last quarter we migrated that system over to a proper database, and so now we can actually query it based on status, based on, you know, fields in the body, based on success and failure, and stuff like that. So we're looking to both increase the ability to filter in the UI and also offer, like, a REST and GraphQL API. Hopefully that will let you do some basic-level filtering to start, and then, based on customer feedback, maybe being able to list failures from a time period and stuff like that.
C: The only thing that we probably won't scope in quite yet is a redelivery API, but at least being able to get the body and pull the body down would let people start to list the failures, and then we'd kind of go based on feedback there. So that's kind of our current thinking. As soon as there's early access and stuff like that, you know, I'll start to put the feelers out, and we can get some testing and stuff like that to make sure it solves y'all's needs.
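The delivery-listing API described above did not exist yet at the time of this meeting, so as a hypothetical sketch only — the record shape and field names here are assumptions, not GitHub's actual API — filtering fetched delivery records down to the failures could look like:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Delivery:
    """One webhook delivery record (hypothetical field names)."""
    guid: str
    event: str          # e.g. "issue_comment"
    status_code: int    # HTTP status the receiving endpoint returned
    redelivery: bool

def failed_deliveries(deliveries: List[Delivery]) -> List[Delivery]:
    """Keep only deliveries the receiver did not acknowledge with a 2xx."""
    return [d for d in deliveries if not (200 <= d.status_code < 300)]

# Example: two deliveries, one of which the receiving service missed.
records = [
    Delivery("a1", "issue_comment", 200, False),
    Delivery("b2", "pull_request", 502, False),
]
print([d.guid for d in failed_deliveries(records)])  # ['b2']
```

The same predicate would drive a "list failures, then redeliver" loop once a redelivery API exists.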
E: ...like, if all of that was documented — reading through some of the best practices, or whatever that page is, you know, we definitely don't adhere to pretty much anything on that page. I think something on there was like: don't make more than one API request a second with the same token. So I think one thing that would be useful for us is knowing what you're expecting of us as an API consumer, and making sure that we can stay within those bounds, so that we don't start having these sorts of failures on our end.
C: Yeah, definitely — oh, sorry, go ahead. You were gonna say something? Oh, no, good — yeah. So I think what I would love to do here — this is something where I find a high-bandwidth conversation on our end is really useful, because depending on the usage patterns and stuff like that, there are a myriad of different rate limiters, both documented and undocumented for security purposes, like abuse rate limiters and beyond.
C: We can really shape the advice based on the expectations — based on whether it's concurrency, whether it's, you know, CPU time, whether it's a lot of those different things. We generally don't tweak those — they're pretty much fixed — but you should be able to work around them, usually with some pretty light concurrency controls or, you know, backoff or something like that. So I can definitely follow up with that.

E: Yeah, that would be great for us.
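The "light concurrency controls or backoff" mentioned here are usually implemented as capped exponential backoff with jitter. This is a generic client-side sketch, not official GitHub guidance; the base and cap values are arbitrary assumptions:

```python
import random
from typing import Optional

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before retry number `attempt` (0-based): exponential growth
    capped at `cap`, with full jitter so many clients sharing a token
    don't all retry in lockstep."""
    exp = min(cap, base * (2 ** attempt))
    return random.uniform(0.0, exp)

def next_delay(attempt: int, retry_after: Optional[float] = None) -> float:
    """If the server sent a Retry-After value, honor it outright;
    otherwise fall back to jittered exponential backoff."""
    return retry_after if retry_after is not None else backoff_delay(attempt)
```

Pairing this with a small concurrency limit (e.g. a semaphore around outbound requests) covers both of the mitigations named above.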
A: Yes — so, one, you know, echoing what Tim and Andrea were saying: definitely awesome, so thanks for coming here and engaging with us. I think right now, when we do run into problems, I feel like our best move is to, you know, go ping Gwen — and that strategy actually works fairly well — but, you know, having an official channel of communication I think will be super great, and so we really appreciate that. And, you know, I think also — I feel like this should be a two-way street, and I think that if there are, you know, architectural changes or whatever that we need to make to be better GitHub API citizens, I think we would be interested; more or less, we are trying to do that.
B: I think Cole brought that up a little bit in his doc with regards to Actions and where that's headed. I think everyone in SIG Testing would, like, super appreciate some guidance on what the right direction is to take there, just so we don't back ourselves into a corner or duplicate work or stuff like that. Am I getting that right, Erick?

A: Yeah, totally.
A: Yeah, that's great — okay, cool. So, let's see: maybe I will work with Cole and Nick on creating a follow-up session or something, and I will send out a meeting invite for that, for those who are interested in diving into those details.
C: Yeah, I mean, I love looking at the big users of our API. It's — it's always a humbling experience to see, you know, people doing all these requests: what they have to work around, what tools they've built because we're lacking. So even if we don't change anything — even if we just learn stuff — I'd definitely love to dig into kind of what you've done to be successful, so that maybe the next person doesn't have to spend quite as much effort to be as successful as you all are.
I: We have node tarballs already. So the main thing, of course, is it's a potentially breaking change, because now I need to figure out who's using the kubernetes test tarball. You know, for our CI for the project it's pretty easy to fix, because we control everything, but I don't know all the downstream users, and so I tried to kind of find some of those and identify some issues. I guess one aspect is, you know — it seems like this is generally an accepted idea; at least most of the feedback I've gotten has been pretty positive about this — but if there are reasons we shouldn't do this, that would be great to know. And if there are, you know, consumers of this tarball that are not open source, that I can't easily find, it'd be great to kind of understand how they're doing things so that I can try to avoid breaking them. And so on the KEP we've also kind of been discussing a side aspect...
I: ...which is, you know: what is the timeline you want to do this on, and how do we want to try to keep from breaking things? You know, one option is we can continue to produce this mondo tarball along with the split ones for a little while, or we can possibly bring it back if things break, or we can just make a clean break. So, I don't know — people can have thoughts about this.
J: Given the nature that it's a release artifact, we probably should be following standard procedures, which means notifying with two-plus releases. So if we consider that to be standard procedure for command-line — or basically forward-facing artifacts of some kind — then we should probably follow the standard pattern that exists for that. Even though it doesn't explicitly call out artifact generation, it calls out, like, how do consumers of APIs and command-line flags expect changes, and in what time frame, right?
I: Right — I guess we could have a compromise where the official release maybe still builds both, and we can, you know, say it's deprecated in 1.16 or 1.17, whatever the right number is. But for CI we could just, you know, update our system and not use the mondo tarball. That seems like a reasonable compromise.

I: ...to make sure that, you know, things are — oh yeah, I don't want to create any undue burden on anyone. Yeah, there's a change here, but, you know, the last one was a fire drill, so I appreciate you commenting and helping with that — helping me understand your situation as well. Thank you. I think, as long as we can avoid that...
F: Oh, sorry — go ahead. So the only additional note that I had would be — because, I know, I don't want to get rid of any of the other tarballs — but right now, for example, you know, if you want to take that test tarball and do something with it, it's not necessarily enough; you might also need the kubernetes tarball for the cluster scripts. If we're going to modify the actual test tarball, it would kind of be nice if the entirety of the artifacts needed for running tests, or reporting on those tests, was in there.
J: So one of the things that hasn't been done — which, actually, like, we did it and then we pulled it back, and which should probably overlap with this release — is the test container. We actually publish this for tests, but we do not publish it as part of the release. So for broader consumers that might consume the tarball today: if they wanted to consume the container — which we have all the tooling for — we just need to publish it on release and have the, you know, overlapping multi-arch manifests pushed. So we should do that.
F: Now that we're done with that — for what it's worth, I actually do like having the files, and just not having to break apart the container, primarily because some of the tooling that I've written is for custom builds that you can run tests on without having to stage them first. So as long as, you know, it's easy to get those tarballs, I don't want to take that away just because there may be an alternative path as well. I think, like, an addition to it is okay here.
A: All right — thanks, Jeff and everybody, for chatting about that. Next up: KubeCon EU is coming up, and as a SIG we get to decide whether we want to do a deep dive or an intro session, or both. Historically we have done both; we can have, you know, two people present for each, and I guess we can also ask for a working session with four speakers.
F: So this actually is from — let me see if he jumped on — a new colleague of mine; he's been with us for a week, Andrew Kim. And that's funny — he's actually on my team, so I told people: I just want my team to call me Cook; that's all anyone ever called me growing up. But thanks. Anyway, Andrew Kim reached out to me in one of our Slack channels, saying: hey, I've defined this label called cloud provider for feature tests.
F: You can see the link to the issue there. And he asked, you know, what were my thoughts on labels for — maybe specifically — the vSphere cloud provider, because he was coming at it from a SIG Cloud Provider level. And so, a couple of thoughts on that: we were both wondering, one, is this redundant? You know, should the SIG ownership be enough to denote which tests are for a particular provider? But at the same time I was wondering: is there an opportunity here for maybe suggesting to SIG Cloud Provider, SIG Cluster Lifecycle, and I guess...
F: ...the CSI folks, SIG Storage, and SIG Cloud Provider, to standardize on some type of label for provider tests — whether they're internal or external providers — so that if somebody wants to run all provider tests (again, internal or external), there's some label. And I kind of outlined my thoughts on here: you know, the SIG ownership labels may be enough, but I did — I did mention that just because it's owned by SIG Cloud Provider, it may not necessarily be a cloud provider test. That might be rare, but...
J: This overlaps with the conformance profiles, at least for integration points. So this is the standard conversation piece that's been had many times — and it's funny that Aaron's not here, because there's a lot he could go on about on this — but the standard practice that people have wanted for integrations, such as CSI and other cloud-provider aspects of integration, is that there was supposed to be a quote-unquote "profile", right? They were gonna call it conformance profiles, and there was even, like, a layered tier of labelings that people wanted to make.
J: We started the conversation, but, like, no one has wanted to enter into it, because the current conformance suite is not good enough for most people today. So there was going to be a second stage of conversations. I think the question we've had is, like, resourcing to get this done right, and people to actually execute on a well-defined set of tasks. So the short answer is: people have talked about it.
F: Like I said, I know his goal was, at least from SIG Cloud Provider, to create a test suite — because he felt the tests were lacking — to sort of validate where they are today with the in-tree providers, and then, as people move out-of-tree, to have a vendor-agnostic set of tests that can say: this still works as it used to work, based on this test suite. And, you know, he was trying to ask: should we use this label? Yeah — this is a deeper conversation, yeah.
A: I mean, another thing to potentially think about is, you know — at various points we've had discussions about: is there a better way of adding metadata and selecting tests, other than, you know, adding it to the test name? Is there a better way to have metadata and select a suite based off of that? And is that what profiles — is the profiles idea targeting that at all, Tim? But, like, you know, one could imagine that if you want to get very specific about — oh, this test works for providers A, B, and C, but not D, or whatever — you could wind up with an extremely long test name, if that is the only mechanism we have for adding metadata to tests and using it to select off of. So at some point it might be interesting for this group to start thinking about, you know: is there a better way for us to associate tests with metadata?
J: That is an ongoing conversation that I've poked at. I really want that for extension points inside of the API. So, like, the profiles idea is: here's conformance — CNCF conformance — and then outside of that layer there would be a thing like profiles: whether or not a provider meets a given profile, and it'd be a suite of tests that are common to all. The capabilities detected based on metadata would be the extensions — like, CSI metadata would be a CSI, CNI, dot dot dot: things that are extension points to the main API, to auto-detect and then run tests based upon the detection of that information. I know Patrick is on, and we've briefly discussed that.
E: Super quickly — there are only a couple things here, and they're kind of time-critical, or at least I didn't want to wait another week. I guess the two things that I'd like you to know: so, we are removing run_after_success very soon. If you're running a cluster and you have those defined, start working to get rid of them. And there is a large refactor going into how that behavior works; there is a link here to an overview of the changes, which I'd appreciate folks reviewing.
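For context, `run_after_success` was a field on Prow job configs that chained follow-up jobs onto a successful run. The fragment below is illustrative only — the job names and images are made up — showing the kind of stanza being removed; the usual replacement is to make the follow-up its own independently triggered job:

```yaml
# Deprecated pattern: chaining jobs via run_after_success (being removed).
postsubmits:
  example-org/example-repo:
  - name: build-example          # hypothetical job name
    spec:
      containers:
      - image: example-builder
    run_after_success:           # delete this whole stanza
    - name: publish-example      # hypothetical follow-up job
      spec:
        containers:
        - image: example-publisher
```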
E: One thing — maybe Ben and company, if they can stay on the call after 1:30: I do want to understand a whole lot better what is left for removing run_after_success on the Prow deployments that aren't the cluster I run, but we don't need to hold you here for that. I just want to understand that better, so we can have tracking issues for it.
E: I don't think it's — I don't think it's possible for us to serialize to a v1 type now. I mean, we could just make that a breaking change.
A: We don't need to, like, immediately time-box this — we're sort of running a little bit over — so thanks, everybody, for coming. I can keep this open a little bit, but for the most part, thanks, everybody, for attending the meeting this week, and if you are interested in continuing to hash out some things about run_after_success, feel free to stay on for a couple more minutes.
E: Yeah, and the config stuff, I think, is — is doubly confusing for us too, because those aren't actually Kubernetes API types, right? They're really just Go types, and we don't have them versioned today, and I've been saying that we should have an internal and an external type for a very long time. So there's a lot of places where we have external types that contain user-facing fields, and then all of our structs have unexported members that are parsed out or somehow ingested or validated.

A: Yeah, I agree. So, is there anything else?
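The internal/external split being asked for here is the pattern Kubernetes API machinery uses: a versioned, serialization-facing type that stays stable, converted into an internal type the code is free to refactor. A minimal language-agnostic sketch — in Python, with made-up field names; Prow's real config types are Go structs:

```python
from dataclasses import dataclass

@dataclass
class JobConfigV1:
    """External, versioned type: the only shape users serialize against."""
    name: str
    timeout_minutes: int

@dataclass
class JobConfig:
    """Internal type: free to change shape without breaking user configs."""
    name: str
    timeout_seconds: int

def from_v1(ext: JobConfigV1) -> JobConfig:
    """Conversion is the one place where version skew is handled."""
    return JobConfig(name=ext.name, timeout_seconds=ext.timeout_minutes * 60)

print(from_v1(JobConfigV1("build", 30)).timeout_seconds)  # 1800
```

Unexported/derived internal fields then live only on the internal type, instead of leaking into the user-facing one.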
E: Well — and I don't mean this in a calling-out sort of way — but, for instance, that change to reporting for most events: since that was added as a user-facing change to the config, with the express intent of changing it later, I think that might need to actually go through that process before we cut a stable release. Well, not necessarily — I'm thinking you shouldn't go straight to stable, though we could have; we could use that treatment. Like, I don't know.
E: ...it could also have been abused or something. Like, a while back I made a breaking change — to not mount the default service account into pods using that property. What I could have done instead is introduce a new version of the API that had that behavior. Right — that sort of thing just makes it a lot easier to say the change exists in the code, but you need to opt in to it.
E: And I agree — before we start leaning on this, we do need to think about: are there other things that we absolutely are not interested in having a migration path for? And I think, due to the complexity, run_after_success was unfortunately one of those, right? Not only was it not reasonable to have a migration path — the thing was barely supported as it was. The other direction I'm headed with this is: if we can remove it, that would be a really good thing to do, because then people can stop...
A: I mean, we are in control of when we deploy prow.k8s.io. I would be totally okay with just merging the deletion of run_after_success and holding off on updating Prow until we resolve all the job issues — I mean, because everybody else is essentially going to have to do the same thing. I think I would personally be totally fine with that.
E: I had some questions for him, because the second kubernetes client flag implementation was added by him, and I can validate that none of the Prow deployments I know of are using those flags. I don't know if they're using your deployment or if they have their own deployment, and I would like to know if they're using Hubble.