From YouTube: July Agones Community Meeting
A
All right, we are recording. Welcome again to the July 23rd episode of the community meeting for the Agones project. I'm doing my podcaster voice because I know we're recording. So, Stephen, what's been happening this week that I haven't seen, because I've been on holiday?
B
Yeah, there just seem to be quite a few test failures, but not related to the changes, so I think... yeah, Robbie had identified some of them. I think the static resources, and I think some of the C++ build section, were failing as well. Oh fun, but yeah: it always seems unrelated to the change.
A
We have had some flakiness for sure recently. I thought I fixed a bunch of it before the last release, but if there's more stuff: do we have tickets for some of this flakiness that's showing up, or individual things?
B
I think, well, the good news is it's new tests that are failing, so...
A
Sorry, what was I gonna say, Stephen? Some of that's hard, right, around, like: how can we...
A
That's a good question. I know I have certain habits. Like, end-to-end tests are usually really easy; those can be the flaky ones, just because I know to search for "fail:" inside those to find the test that's flaking in there, and I'm like...
A
Oh, it's that specific test. But sometimes, like conformance tests for me: I often find they've been kind of flaky recently on certain things, but trying to work out which one failed and how... we've done some updates on that, but it's not necessarily obvious, for sure, why it failed or how. Alex, you were on one of those; there was a ticket for, like, conformance.
C
Yeah, I've seen it; the SDK conformance was failing today.
C
All right. Probably, to emulate this, we should have a similar VM and run it 20 times; hopefully it would flake. I mean, truly reproducing a similar problem is usually hard on a PC; possibly on a VM it would be easier.
A
Yeah, some of them are rough too. With some of them I've run into, it's sort of: when this thing happens to be running at the same time as this other thing, then they happen to collide in a certain way, but if you run it in isolation, then it never fails. Or the CPU is getting overloaded by this much, which has made this thing slower, which means that this finally kicks in, but sometimes it doesn't. So yeah, that can be really tough.
A
It's running on... we bumped it up to a 32-core machine. What is it running now?
A
That shaved some time off, because we do so much. Actually, here's an interesting question that I'll ask; I've kind of been wondering, if we...
A
Is this a good reason to maybe start to split things out, either into their own repos or, actually, maybe into their own Cloud Build steps? That's probably the step we'd need to do first anyway, probably with conditional triggers.
A
I mean, I'm thinking the first step is we maintain the monorepo, but we could have multiple Cloud Builds that run, with some of them being conditional. And that would also set us up to split things out later, because once we've split that infrastructure out, the rest would actually be a lot easier, if that makes sense. A classic example, I think, would be:
A
If the site's not updated, then don't run the site tests, which would mean that if Stephen's doing stuff elsewhere, there's less chance of flakiness, because there are fewer things happening.
A
There are pros and cons, okay, to the monorepo and to pulling some stuff out, but yeah. What was I gonna say... right now the Agones CI pipeline is like a bunch of glue and string that makes all the sync with the GitHub repos happen, because it was all built before the actual real...
A
One of the nice features it does have is the ability to say: run these tests if someone who's an owner writes, like, "/runtests" or something in a comment. So, for example, you were talking about the Terraform tests, Alex, and they're perpetually dangerous; we could have those set up, but with an approval step, and have that be separate, even if we keep the monorepo, just to be able to do that. So there could be some nice stuff; we could use something like that to set it up.
A
We might lose a couple of things; doing some of the status checking and stuff would maybe be a little bit trickier, but we can worry about that later. A good first step might be to just start splitting that stuff out, and then we can start either looking at splitting things out further or having different workflows for each one.
B
I like it, good stuff. We should do the conditional triggering within a folder; is that what you're thinking?
A
Yeah, you can have stuff, like we do, where you can say: hey, only run this build if files in this folder match this pattern. So you could say, for example, we don't deploy the site unless there's a change to the site; we have that continuous process running in the background that pushes to the develop subdomain. That doesn't work when we've got the one big Cloud Build script, because you can't say "don't run this step"; that's just not functionality we have. But you could do it on individual Cloud Build steps, and then they could also run in parallel, which would be a lot nicer.
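As a rough sketch of what that could look like (trigger names and paths here are made up, not the project's actual layout), per-component Cloud Build triggers can use file filters so each build only fires when matching files change:

```yaml
# Hypothetical trigger definitions. includedFiles / ignoredFiles are the
# Cloud Build trigger fields that gate a build on which files changed
# in the push or pull request.
---
# Trigger 1: site-only build, runs only when site files change.
name: site-build
filename: site/cloudbuild.yaml
includedFiles:
  - "site/**"
---
# Trigger 2: main build, skips changes that only touch the site.
name: main-build
filename: cloudbuild.yaml
ignoredFiles:
  - "site/**"
```

Because each trigger is an independent build, they also run in parallel, matching the behavior described above.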
A
The only thing there would be that we'd end up with multiple checks inside a GitHub PR, where you'd see all the different ones, and it's much harder then to do the thing where we drop a comment, so you get the email and the notification. But we get some other benefits, so there are pros and cons.
A
Yeah, and then you could actually do the stuff where the site tests would look for those files, and if they get updated, then it runs, and you would see that as an item. Open Match does that; they use that system, and it works pretty well for them.
A
It's also a little harder. I don't know about... within GitHub you have certain statuses, like "the CI run is required for this branch", so there might need to be a little bit more manual checking, or manual knowledge that certain steps are running for certain things. We couldn't say, for everything, "make sure the site tests ran", because for some PRs that check won't exist, and I don't know whether GitHub will handle that; they might just see it as never having run.
A
So you can't merge, and you're like: but it's not important. So we couldn't set those as required, for example; it would be up to the approver to be able to say, oh, that should have passed or shouldn't have passed, or something like that. But one problem at a time; it does solve some other issues, particularly around flakiness and stuff like that.
A
The other thing that could be nice about splitting things up, I think, is: we have that big dev Docker image. If you only want to do site stuff, a much smaller image, like a sub-image, could be the one that gets built for that, rather than building everything. You could do some nice stuff that way. But we'll write a ticket for that and get stuff pulled out a little bit. Cool, all right. Anything else about flakiness stuff?
A
Patches for SDKs, yeah, that's another thing as well: we could do separate SDK releases.
B
Yes, I was wondering if Helm 3 is now okay to use.
A
You'll see that it now has Helm 3 commands all set up. All the local development tooling now runs on Helm 3 as well, and we now have a Helm 3 Terraform provider too. So nice; I'm a fan.
A
The nice thing I'd recommend, if anyone is playing with Helm 3: if you already have Tiller installed, or a Helm 2 install, you can actually run them side by side. Just make sure to remove the Agones installation first, because otherwise you won't be able to install the other one and they'll collide. I've actually done that, and things break. But you can, I mean, if you want to, just leave Tiller there or something like that, and leave Helm 2 to anything you're already doing with Helm 2.
A
You can still have them live side by side, as long as you haven't got conflicting resources, which I thought was a nice touch. But yeah, we didn't have to change the chart or anything; it all just works. So they did a really nice job.
B
What about Helm 2? I guess it's still supported, but will there be a point where we remove support for it?
A
I'll say not in the near future: no plans yet to move away from the Helm 2 chart format, because that's the only thing that's tying us to Helm 2. Helm 3 has... you can do... is there a Helm 3 chart format?
A
Yeah, I don't see us moving anytime soon. I think we would poll the community first and see if there's anyone still using Helm 2, and if there is, then we probably wouldn't touch it for a little while. I don't think we've run into anything yet that's been like, "yes, you must move to the new version", unless Helm 3 makes that decision at some point.
B
Cool. Stephen, it's all you today. Yeah, sorry, last one. It's fine, I did it last minute. Well, I mean, I'm just raising it: yeah, Robert clicked the button and turned on Dependabot. So far it's just the Node.js stuff; I think it's just for the Node dependencies for now, yeah. I haven't seen it for anything else.
B
But then we kind of got scared, because I think when we updated the package.json, we changed the version to "dev", and had we also run npm install, it would have updated the package lock. Which, I mean, is fine; we're not publishing it anywhere anyway, it's just what's there in the repo.
B
So I think that's fine, but I think we were just expecting only the, you know, dependency update to happen when Dependabot did its thing. So we have the other pull request for that, but I think it will be fine. I think Dependabot will probably keep that as well; we'll probably see it for other updates that it makes, yeah.
F
Yeah, I guess the other point about that, though, Stephen, is that your PR for alpha player tracking also fixes the issue that Dependabot was trying to fix. That one just hasn't merged yet, even though it's been open for over 20 days. So if we were doing a quicker review-and-merge cycle on that, we wouldn't have had the security issue in the first place.
C
By the way, do some npm changes happen automatically after this lodash update?
B
Yeah, oh yeah, that's a good point. I mean, yeah, the vulnerability is still there on that published module, but I think for anyone using it, as long as that dependency is not pinned in our module, then they can do an npm update.
B
They just might get a warning. But the ones that we do pin, like the regular dependencies...
B
Those would flag up, and I don't know; I think GitHub should alert us to that fact. I don't know who that notification would go to. Like, if we have a pinned dependency that needs updating, then there should be an alert.
A
Under Security there's code scanning, and it's under the configuration file too. Nifty, okay, cool, that's a good find. And you can tell it when you want it to run as well.
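For reference, here is a minimal sketch of the kind of configuration file being described (the directory path is an assumption for illustration, not necessarily the project's actual layout):

```yaml
# .github/dependabot.yml, a hypothetical minimal config.
# Dependabot scans each listed ecosystem/directory pair on the given
# schedule and opens PRs for outdated or vulnerable dependencies.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/sdks/nodejs"   # assumed location of the Node SDK
    schedule:
      interval: "weekly"
```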
A
Nifty, cool. Upgrading to Kubernetes 1.16 is probably my next one, mainly around making sure that we have time to do it.
F
So I think there were some bigger changes in 1.16; like, I think APIs were going away. Last I checked, some of our Helm stuff still referenced beta versions, yeah, APIs that are disappearing. So I think this one's a little bit more work than some of the other upgrades, and I know the newer Kubernetes releases, at the head, are slowing down due to COVID, so yeah.
D
There we go, yes.
A
And we're on 1.15 now.
A
It's totally fine, I'm not in any rush. I just wanted to bring it up to make sure we were safe.
F
Yep. So one thing we could do for this release, if we don't want to upgrade to 1.16, is go through the Helm stuff and move all those versions to v1, so the delta to go to 1.16 would be smaller. We could do that; it should have basically zero impact, and it would reduce the work for next time. That seems much more reasonable to do in two and a half weeks. Yep, yeah.
A
I don't think there's anything outside of the CRDs, and maybe the API extensions; they may even be separate.
F
If you just search for v1beta1 in our repo, there's a whole bunch of places in our code, like in the Golang, where we used a v1beta1 API; we might want to start migrating those over to v1. Also, I don't think any of those ones are necessarily being dropped in 1.16, but we can expect that they will get dropped at some point in the future, and if there is a stable version, it would be good to move over. Yep.
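As an illustration of the kind of change involved (a generic Deployment, not one of the project's actual manifests), the migration is usually just an apiVersion bump, sometimes with small schema additions such as the selector becoming required:

```yaml
# Before: a beta API group that Kubernetes 1.16 stops serving by default.
#   apiVersion: extensions/v1beta1
# After: the GA group, stable since Kubernetes 1.9.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:            # required under apps/v1
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx   # placeholder image
```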
A
Let's
make
a
note
of
that,
have
you
got?
Have
you
got
that
that
that
search
handy?
Do
you
wanna
drop
that
in
the
in
the
notes?
Sure
that's
probably
a
good
thing
to
have.
F
This gets fun. Yeah, usually there's a decent amount of overlap between when things get promoted to v1 and when the beta stuff gets dropped. So, I mean, we should definitely look at it and see: if there are places where we can start moving to the stable APIs, then we won't have the sort of cliff of "if we don't do it now, the next version doesn't work".
F
Right, so that's a great question. I think this is where some of the stuff is pretty easy to bump up and some of the stuff is going to be a little trickier, and this is where, when you said two and a half weeks, it looked like that was not going to be enough time for us to dig through and untangle all of this stuff. Are there some steps we can take in the two and a half weeks to leave us less work for the next release cycle?
A
Nothing's been deprecated or removed, so if we can upgrade our stuff to 1.16 and start testing against it, maybe that's a good first step, because once we do that, then in the next release we can say... I don't know, maybe that's a bad idea. I was going to say we upgrade our end-to-end clusters to 1.16, and then we know that it's starting to work, but I don't know if that's going to start to fail for this release, where we're still supporting 1.15.
A
I
don't
know
that's
actually
this,
isn't
it
I'm
sure
or
we
hold
off
on
it
all
until
next
release
and
start
with
upgrading
to
116
or
doing
some
like
non
non-end-to-end
test
changes,
but
just
testing
testing
locally
against
116..
That
makes
more
sense.
Maybe.
F
So there are two API type changes we should be aware of. One is things that are graduating to GA, and for those, maybe it's fine to hold off. The other is the things that were beta that are no longer going to be served by default; those are the ones you have to do for 1.16, and their replacements are APIs that have been GA for a long time, we just never managed to change our references. So those ones should all be safe, and no big deal.
A
Oh, I see. I'm just looking at the stuff that you put in there, about apps/v1, networking/v1, policy/v1; that's the stuff that needs to be moved, so yeah, we can do that first. Okay, got it.
F
Yeah, exactly. And the other stuff we don't have to do for 1.16, but once we're on 1.16, we can start moving the references over.
A
I remember we were running a v1beta1 Deployment for a super long time a while ago, but I think that's gone now. All right, we have ten minutes left, so I'll stop there. Alex, you had a couple of extra things you wanted to talk about?
C
Sorry, so the first one is about limiting access inside the cluster. Currently we have a cluster role; possibly we could use namespaces, and the Agones controller would be allowed to control pods only in some namespaces, I mean, "default" as the default one, or others given as an example. So, should this be prioritized somehow, or... we should have...
A
I mean, the ticket makes sense to me, in that more security is better, and we're already specifying which namespaces people have access to anyway in our chart, because we have to set up service accounts. So it doesn't sound like it's actually changing configuration; it's more of a restriction, like more of a security thing. I don't see it as a bad thing. I think if we can work out how to do it, like we're talking about on the ticket...
A
If we can keep the API surface the same, then that's fewer changes, and fewer changes is just a good thing. I don't have any issues with it.
C
Yeah. So the next, smaller one: do you have some process, some meeting, or something like that for how to pin the most valuable tickets, or have a doc with all the valuable and most important tickets we should focus on in this release?
C
By the way, I see that there are lots of versions; probably we could have one place which holds a version, for example of Golang. We manually update every file; somehow we could use Docker variables or arguments. Yeah, I don't know what the good answer to that is.
A
I've seen that too, yeah. I think we've actually got only one version of Go now across all the things, I think. Maybe it's a little easier with find and replace. I don't know what the good answer is there; in some places I think it makes sense to have it statically defined, like, say, in the examples, because then it's easy to understand what's happening. But yeah, I totally hear you on that kind of stuff.
C
Yeah, I will write this down. That's a good one.
A
Does anyone have anything else on the list? Pooneh, did you want to talk about anything with the certs and multi-cluster allocation stuff at all?
G
So there was an issue that I opened for adding a default client certificate for multi-cluster allocation, to simplify the process of using multi-cluster allocation, and then a few days after, there was somebody who also opened a similar issue saying that, yeah, if there was a certificate, then it would be easier to use. So that kind of confirmed my concern. So Nikhil, he's not on this call, but he went ahead and fixed that; he actually made the change.
G
And then there is another ticket opened that was asking for a kind of feature: having a limit on the amount of time in the retry logic, the multi-cluster retries, for making an allocation.
G
Because when a cluster is down, the default gRPC timeout is set to 20 seconds and it tries seven times, so it would take two minutes before the allocation fails over to the second cluster. It would be better to provide a configuration, a Helm configuration, to actually set that timeout and reduce the gRPC timeout to probably 10 seconds.
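The arithmetic behind that two-minute figure, sketched out (the 20-second timeout and seven attempts are the defaults described above):

```python
# Worst-case delay before a multi-cluster allocation fails over to the
# next cluster: every attempt has to time out before the retries run out.
per_attempt_timeout_s = 20   # default gRPC timeout per attempt
attempts = 7                 # attempts before giving up on the cluster

worst_case_s = per_attempt_timeout_s * attempts
print(worst_case_s)          # 140 seconds, a little over two minutes

# Lowering the per-attempt timeout to 10 seconds, as suggested, halves it.
print(10 * attempts)         # 70 seconds
```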
G
That was another issue opened. The third issue, the one I'm still not sure what to do about, was around this: there is a Helm configuration, generateTLS, for the allocator service, the standalone service that we have, basically the one that uses a load balancer by default. When generateTLS is not set, then it defaults to a certificate from the repository.
G
It is expected that those certificates are changed by the customer, but if somebody mistakenly uses those, or the install YAML... the concern, and the issue, was that those certificates are already checked in, so there were some worries about that. Which I don't think is a concern, because those certificates are not really usable; they are there for the service. Like, if another service tried to impersonate that service, yes, that would be a concern, but because they're not really usable, they're dummy certificates, it should be fine.
G
And Nikhil and I had some discussion (there is an issue from a month or longer ago) on whether we can actually do something about the allocator service certificate, for example by using a Helm chart post-install hook to provide a self-signed certificate using the IP address of the allocator, so that the certificate is actually usable the first time, and then have a recommendation on managing it. But we couldn't come up with a good solution, so I don't...
G
That brings up a question: there are some Helm configurations for the allocator service, right? What happens if I use Terraform? Is there any way to set those configurations?
A
Some of the variables for Helm have been written into the Terraform provider. I think we actually have some PRs in the queue right now to add extra ones that haven't been added yet.
A
I saw it the other day; has it been merged yet? It might actually have been merged already. The subnetwork, yeah, the subnetwork and VPC stuff, for example, wasn't exposed in any way for GKE, so we're just adding more options as we find them.
A
Yeah, you can pass in a values file. Yes, you can, so there you go; you can actually have access to all of them.
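A minimal sketch of what that looks like with the Terraform Helm provider; the chart repository URL is the public Agones one, while the values file name and the overridden key are illustrative assumptions:

```hcl
# Install the Agones chart via Terraform, passing Helm values through.
resource "helm_release" "agones" {
  name       = "agones"
  repository = "https://agones.dev/chart/stable"
  chart      = "agones"
  namespace  = "agones-system"

  # A full values file gives access to every chart option...
  values = [file("${path.module}/values.yaml")]

  # ...and individual keys can still be overridden inline.
  set {
    name  = "agones.ping.install"   # illustrative key
    value = "false"
  }
}
```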
A
Sweet, that makes sense. Cool, we are one minute over, so we should probably wrap up. Fantastic; well, thank you, everyone, for joining. Cool, that all sounds good. I appreciate it, and I'll let you all go. I don't think there's anything else.